The most common use for Seeing AI in the classroom will likely be reading text -- short, long, typed, or handwritten -- for students who are blind or have a visual impairment. However, the multiple channels open up many possibilities for use in and out of the classroom. Kids can start by taking pictures of their teachers and classmates and teaching the tool their names. Use the scanner to identify books from the classroom library or to differentiate between packaged food items at a class celebration. Or capture a scene with the scene reader to get a mental map of the layout of the classroom, gym, office, or playground. For younger students with or without a visual impairment, the tool could be a good fit for building vocabulary or identifying objects and colors: Just choose the channel you want, point your device at an object, and hear it described. This would work on a word wall, too, a novel way for kids to learn sight words.
Students can also work together to analyze content and websites for readability and use that data to write to companies and request better accessibility features. This serves to educate all students in some small way about what students with impairments experience, but more importantly, it empowers them to become advocates.
The developers are open about the fact that the tool is still in development, and they welcome suggestions for improvement. With this in mind, getting kids involved in design solutions might be a cool avenue for teachers to explore.
Seeing AI uses a combination of technologies to narrate text, people, objects, and scenes for blind or visually impaired kids. Main channels include a short text reader, document reader, scanner, and person identifier. Experimental channels include the scene, color, and handwriting readers. There are tutorials for every channel, but the tool's not hard to figure out. Just choose a channel, point your device at what you'd like to hear narrated, and tap the screen or wait for the capture.
Some channels guide students toward a target via sounds that increase in frequency as they get closer. Others, such as the document reader, would benefit from a similar feature -- it would help if there were a sound or voice guiding students toward the edges of a page, for instance. Screen reading can be unreliable, especially when pictures, text, and captions are mixed together; if a student doesn't have the device trained on one section, it may read text from anywhere on the page, interfering with intelligibility. Also, there's a way to pause but no way to go back, so if students miss something in a narration, there's no option to have just that part of the text reread.
Seeing AI has some definite advantages over other tools that assist blind or visually impaired students. With so many channels available, students can use this one tool in place of what previously took several separate ones. Because it allows kids to recognize people, objects, colors, text, and scenery, the tool promotes a level of independence that can be life-changing for students who've grown accustomed to more limited resources.
Although Seeing AI was designed with people with a particular disability in mind, others can benefit as well. Kids with autism might like the people recognition tool, and kids who struggle with text-based tasks might benefit from narration. This is one of its most reliable features: While it's a little more robotic than some optical character recognition (OCR) tools, it's portable and reads all kinds of text with a surprising degree of accuracy. Finally, younger learners can learn about objects, colors, vocabulary, and sight words using the same tool that their classmates with visual impairments are using. While they wouldn't be using the technology in the same way, students can help one another see things differently, promoting a level of collaboration and inclusivity they might not otherwise experience.