Clothing and fashion are significant forms of communication, ubiquitous across lifestyles and cultures. Yet fashion is a visual language. People with vision impairments are thus at a distinct disadvantage: they cannot observe the norms, extremes, and nuances of fashion on others, while nearly everyone else can see (and judge) their fashion choices.
As we discovered in a qualitative study, clothing presents problems in two significant areas: objective (such as garment color or washing instructions) and subjective (such as appropriateness for an occasion). Building on this study and a follow-up online survey, we are exploring two assistive technology solutions intended to also be universally appealing: a wearable technology for tagging clothes with objective information, and a social collaboration project connecting sighted people with people with vision impairments for advice on subjective questions. Through these projects we hope to remove some of the accessibility barriers and alleviate the stress, angst, and memory load that clothing currently presents for many people with vision impairments.
RFID Clothing Tags
We explored a wearable technology solution that uses a sensor and scanning system similar to the E-ZPass toll system. We sewed small, washable, passive RFID tags into garments and built a reader (on the Arduino prototyping platform) that speaks aloud the information programmed for each embedded tag. With this solution, users can quickly scan their clothes and hear basic information such as garment color and washing instructions. This was intended to lessen the burden of memorizing a wardrobe or relying on insufficient tagging schemes such as safety pins.
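The tag-to-speech lookup at the heart of this system can be sketched as follows. This is a minimal illustration, not the actual Arduino firmware: the tag IDs, the `GarmentInfo` fields, and `describe_garment` are hypothetical names for the association between an embedded tag and the spoken description.

```cpp
#include <map>
#include <string>

// Hypothetical record of the objective information programmed for one RFID tag.
struct GarmentInfo {
    std::string color;
    std::string washing;  // care instructions
};

// Example tag database: maps an RFID tag's unique ID to its garment record.
// In the prototype, this association is programmed when the tag is sewn in.
std::map<std::string, GarmentInfo> tag_db = {
    {"04:A2:1B:7C", {"navy blue", "machine wash cold"}},
    {"04:9F:33:0D", {"white", "dry clean only"}},
};

// Build the sentence the reader would speak aloud for a scanned tag.
std::string describe_garment(const std::string& tag_id) {
    auto it = tag_db.find(tag_id);
    if (it == tag_db.end()) return "Unknown garment.";
    return "This garment is " + it->second.color +
           ". Care: " + it->second.washing + ".";
}
```

On the actual device, the scanned tag ID would come from the RFID reader hardware and the resulting string would be passed to a text-to-speech module rather than returned.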
Crowdsourcing Fashion Advice
Clothing decisions are often made collaboratively, as many choices must be confirmed by a sighted person. A sighted companion is not always available, however, and a person may simply want a second (or third) opinion. For this we explored a special “clothing instance” of the iPhone application VizWiz, which lets users take a photo, speak a question, and receive a response in real time from a sighted volunteer or Web worker.
Using our own set of volunteers, we explored this application for the more subjective questions about fashion, such as “Does this shirt match these pants?” or “Does this tie go with this suit?” Because the system is anonymous, it is also a good venue for potentially embarrassing questions about visual flaws such as stains. We have piloted this solution with promising results (ACM ASSETS 2012 paper). From this work, we are continuing to explore how the fundamental concept of crowdsourcing, as well as current applications that facilitate crowdsourcing, can be applied to dispensing subjective clothing advice.
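Because users may want a second or third opinion, answers from several volunteers could be aggregated. The sketch below is our illustration of one such aggregation, not part of VizWiz itself; `consensus` is a hypothetical helper for yes/no questions like “Does this shirt match these pants?”

```cpp
#include <string>
#include <vector>

// Tally yes/no answers from several volunteers and surface a consensus.
// Illustrative only: the deployed system relays individual answers as they
// arrive rather than voting on them.
std::string consensus(const std::vector<std::string>& answers) {
    int yes = 0, no = 0;
    for (const auto& a : answers) {
        if (a == "yes") ++yes;
        else if (a == "no") ++no;  // other phrasings are ignored in this sketch
    }
    if (yes == no) return "unclear";
    return yes > no ? "yes" : "no";
}
```
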
This project is sponsored by Toyota TEMA, who announced this project at the 2015 NFB Convention.
Many researchers have created solutions to aid people with vision impairments with indoor and outdoor navigation, and many products are on the market. However, several obstacles remain. This research project takes a “bottom-up” approach: we first seek to understand the current state of navigation from the perspective of people with vision impairments, then use that understanding to inform the design of a mobile navigation aid. As in the accessible fashion work, we hope to address visual questions, this time in the context of navigation.
In the first year of the project, we conducted several exploratory and participatory studies, including interviews, observations, a focus group, and diary studies with blind participants, as well as interviews with Orientation and Mobility Specialists. In the second year, we conducted partner observations (with sighted-and-blind companion navigation teams) and Wizard of Oz prototype testing. In the third year, we have continued the Wizard of Oz and observational studies, focusing on findings specific to the product the Toyota team intends to create. We have also conducted several online studies, including reflections on attending several blindness-related conventions. Along with device needs and requirements, our findings include deeper, more detailed descriptions of the breadth of potential users and scenarios, and observations of differences in communication style between blind and sighted navigators.