This project was sponsored by pathVu, a Pittsburgh startup focused on creating tools that make navigation more accessible, and by SWPPA, a Southwestern Pennsylvania organization that advocates for the needs of older adults in the area.
We set out with the mission of increasing the accessibility of pedestrian travel, particularly for people with disabilities. We created a navigation service ecosystem in which people of all abilities are incentivized and able to contribute meaningful information about the accessibility of routes and buildings. People with disabilities can then use this information to make informed travel decisions, stay safe and comfortable, and complete their trips as planned. We knew that our process would involve designing a mobile application, but we were unsure what exactly the app would do and how it would do it.
During the exploration phase of the project, we learned a lot about how people with various capabilities plan and execute trips. We used walk-along studies to learn how people rely on navigation tools to complete both new and regular trips. We even adapted our contextual inquiry protocol to be remote, which enabled us to “walk along” with people we would not otherwise have been able to learn from. From these sessions we learned about the importance of the pre-planning stage, where people weigh various criteria to decide on a transportation method and route. These criteria range from the accessibility of the destination building to the current weather. Crucially, we began to understand that accessibility is different for every person we spoke to: “accessible” is always subjectively defined and context-dependent.
More concretely, imagine a doctor’s office whose entrance has steps in front. This entrance may be accessible to an older adult on days when their back pain is low and they have their cane with them. On days when either of those conditions isn’t met, however, the entrance isn’t accessible for this person. And even when both are met, the steps may be wet from rain, which discourages this person from using them; again, the entrance isn’t accessible for this person that day. The resulting insight is that “accessible” is determined by each person on a contextual basis, using their own specific criteria. Therefore, we realized that the tool we created must allow users to make navigation decisions using their own criteria, as they are the experts on how they travel best. It was our job to empower these users with enough accurate, up-to-date information to make the best possible navigation decisions.
“Call a restaurant; they say it’s accessible, then you get there and there’s 1 step to get in... their view of accessibility is somewhat different than what yours is.” - wheelchair user
With this requirement in mind, we investigated methods of collecting the data through several rounds of interviews and prototype testing. While speaking to people with disabilities, we learned that we couldn’t rely only on this population to contribute information on navigation accessibility: there simply weren’t enough willing contributors to supply data for the much larger pool of users who want to use it. From here, we began to design ways to engage able-bodied people as well as people with disabilities in the data collection process. One of our more fun ideas was “Chasing Monsters” (shown below), an AR game in which players capture virtual monsters in their habitats, which were various sidewalk obstacles, like cracks. We found that the game format was polarizing: some people really liked the idea, and some said they would never play it. Additionally, from this and other prototypes, we learned that people didn’t know which conditions were important to report. The difficulty was that no individual contributor’s accessibility needs matched those of the large population of users who want to use an accessible navigation service.
Either people had specific accessibility needs that didn’t generalize to the total population of users, or they had no specific accessibility needs at all, and that perspective didn’t generalize either. Because of this mismatch in data needs, we began to design education modules for data contributors to use. Ultimately, we found it imperative to be as direct and specific as possible when outlining what to report. After a few iterations of testing these education modules, we incorporated example photos and even showed what didn’t need to be reported alongside the conditions that did. With these guidelines in place, data contributors felt confident they could report useful data for the navigation system.
Taking this reporting structure one step further, we realized the importance of systematizing the information. We created a taxonomy of route and building conditions which showed how these conditions were categorized, how they related to one another, and how people ought to think about reporting them. Once this taxonomy was established, we imported it into the application so that education modules and reporting flows used this same information architecture. In short, our users loved this - they found it easier to conceptualize various route conditions and thus report them accurately.
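To make the idea concrete, here is a minimal sketch of how a shared condition taxonomy could be represented so that the education modules and the reporting flow both read from the same structure. The category and condition names below are illustrative assumptions, not our actual taxonomy.

```python
# Illustrative sketch: one taxonomy definition that both the education modules
# and the reporting flow consume, so the two never drift apart.
# Categories and conditions below are examples, not the real taxonomy.
CONDITION_TAXONOMY = {
    "path": {
        "surface": ["crack", "uneven sidewalk", "missing curb ramp"],
        "obstruction": ["construction", "overgrown vegetation"],
    },
    "building": {
        "entrance": ["steps at entrance", "ramp", "automatic door"],
        "interior": ["elevator", "narrow doorway"],
    },
}

def reporting_options(domain: str, category: str) -> list[str]:
    """Return the conditions a contributor can report for a given category."""
    return CONDITION_TAXONOMY[domain][category]

def education_conditions() -> list[str]:
    """Flatten the taxonomy so each condition can be paired with example photos."""
    return [
        condition
        for domain in CONDITION_TAXONOMY.values()
        for conditions in domain.values()
        for condition in conditions
    ]
```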
Minimizing confusion on the data contribution front allowed us to focus on how people would use this information for their own navigation needs. Our research showed that people like to view the accessibility of both the route and the building during their pre-planning phase. In short, people want to know exactly what to expect before embarking on their journey; most of the navigation horror stories we heard stemmed from a mismatch between expectations and reality. Thus, we built in the functionality to view both path conditions, like uneven sidewalks and cracks, and building conditions, like steps and automatic doors. Furthermore, we created the “Smart Route” feature, which determines the best possible route for a specific user based on the personal travel criteria they entered during onboarding. So, if a user has trouble going up inclines and wants to avoid them, the app could instruct them to take the bus one stop past the destination in order to avoid an incline. Instead of recommending routes based purely on shortest distance, “Smart Route” takes personal criteria into consideration to find the route best suited to the user’s preferences and capabilities.
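One way to frame the idea behind “Smart Route” (a sketch only, not pathVu’s actual implementation) is as a shortest-path search where each segment’s cost is inflated for conditions the user wants to avoid. The graph, edge attributes, and preference names below are assumed for illustration.

```python
import networkx as nx

# Sketch of personalized routing: a standard shortest-path search, but the
# edge cost grows for conditions the user prefers to avoid. All names and
# numbers here are illustrative assumptions.
user_prefs = {"avoid_inclines": True, "avoid_uneven_surface": False}

def personalized_cost(u, v, edge):
    cost = edge["distance_m"]
    if user_prefs.get("avoid_inclines") and edge.get("incline_pct", 0) > 5:
        cost *= 10  # steep segments become very unattractive, not impossible
    if user_prefs.get("avoid_uneven_surface") and edge.get("uneven", False):
        cost *= 5
    return cost

G = nx.Graph()
G.add_edge("home", "hill_path", distance_m=300, incline_pct=8)
G.add_edge("hill_path", "clinic", distance_m=100)
G.add_edge("home", "stop_past_clinic", distance_m=900)  # e.g. riding the bus one stop further
G.add_edge("stop_past_clinic", "clinic", distance_m=200)

route = nx.shortest_path(G, "home", "clinic", weight=personalized_cost)
print(route)  # the longer, flat route wins once inclines are penalized
```

The key design choice is that preferences reshape the cost function rather than hard-blocking segments, so the system can still fall back to a less preferred route when no alternative exists.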
At this point, we understood what accessibility information people need in order to navigate successfully, and what guidance contributors need in order to report that information for others to use. It was now time to design a complete package that incentivizes users to contribute data and helps other users navigate a world that wasn’t built with accessibility in mind. We used a series of speed-dating protocols and semi-structured interviews to learn what would motivate people to contribute data that didn’t directly benefit themselves. We arrived at two findings: people enjoy feeling like they’re helping others and want to see the positive results of their actions; and people will go out of their way to complete a task if they are rewarded, even minimally. We first used this insight to design the progress-monitoring portion of the app, where data contributors can see how many people have used the information they’ve provided. Users loved this ability to see and quantify the impact of their actions, which in turn encouraged them to contribute more.
Sometimes awareness of one’s positive impact on the community isn’t enough to spur regular engagement with an app of this sort. Sometimes people need incentives that provide clear benefits to themselves before they will spend their time helping others. The most requested incentive for contributors was money; practically everybody asked for it. Since paying people for their contributions wasn’t economically viable, we sought the next best option. From our research, we found that gift cards, discounts, and small prizes from local businesses would motivate people almost as much as money.
With this insight in mind, we crafted a three-stage plan for establishing and expanding the pathVu navigation ecosystem. The first phase focuses on leveraging community events to collect a base layer of usable data for the app; this attracts early users and demonstrates what the service is capable of. In the second phase, we introduce a systematic, reCAPTCHA-style method of collecting data by asking users to identify certain path and building conditions within a series of images. These quizzes are placed in existing user flows (like making an online order from a local restaurant), where users are naturally motivated to complete the transaction and, because the format is familiar, are not averse to the quiz. Finally, in the third phase of expansion, the pathVu app is front and center. With a foundation of data and general awareness of the service already established, the app can stand on its own. Some users will only use the navigation service, some will only contribute data, and some will make use of both types of app functionality.
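Mechanically, the second phase could work roughly as sketched below: a quiz item pairs a photo with a single question, and an answer is only promoted into the map data once enough users agree. The field names, thresholds, and flow here are assumptions for illustration, not a specification of the actual pipeline.

```python
from collections import Counter
from dataclasses import dataclass, field

# Sketch of the phase-two quiz mechanic: an image question embedded in an
# existing flow (e.g. an online order), with answers aggregated across users.
# Field names and the agreement threshold are illustrative assumptions.
@dataclass
class QuizItem:
    image_url: str
    question: str  # e.g. "Does this entrance have steps?"
    answers: list[str] = field(default_factory=list)

    def record_answer(self, answer: str) -> None:
        self.answers.append(answer)

    def consensus(self, min_answers: int = 5, min_agreement: float = 0.8):
        """Promote an answer to the map only once enough users agree on it."""
        if len(self.answers) < min_answers:
            return None
        top_answer, count = Counter(self.answers).most_common(1)[0]
        return top_answer if count / len(self.answers) >= min_agreement else None

item = QuizItem("https://example.com/entrance.jpg", "Does this entrance have steps?")
for a in ["yes", "yes", "yes", "no", "yes", "yes"]:
    item.record_answer(a)
print(item.consensus())  # "yes" once agreement passes the threshold
```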
Building this navigation ecosystem with the currently designed capabilities results in a robustly helpful system, but there are some straightforward ways to grow and improve on the current functionality. First, batch reporting is a highly requested feature that would give users more freedom when contributing data: through a diary study, we learned that some users prefer to document path conditions throughout the day and then submit them all at the end of the day in one go. Supporting this pattern of behavior increases the rate at which data can be integrated into the system and then used by other users.
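One possible shape for batch reporting, sketched under assumed names (the local queue file, report fields, and upload hook are not pathVu’s actual API): reports are captured locally throughout the day, then flushed in a single submission when the contributor chooses.

```python
import json
import time
from pathlib import Path

# Sketch of batch reporting: capture reports locally during the day, then
# submit them all at once. The queue file, report fields, and the `send`
# upload hook are illustrative assumptions.
QUEUE_FILE = Path("pending_reports.json")

def queue_report(condition: str, lat: float, lon: float, photo_path=None) -> None:
    """Append a report to the local queue instead of sending it immediately."""
    pending = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    pending.append({
        "condition": condition,
        "lat": lat,
        "lon": lon,
        "photo": photo_path,
        "captured_at": time.time(),
    })
    QUEUE_FILE.write_text(json.dumps(pending))

def submit_batch(send) -> int:
    """Upload every queued report in one go; `send` is whatever uploads a report."""
    pending = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    for report in pending:
        send(report)
    QUEUE_FILE.unlink(missing_ok=True)
    return len(pending)
```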
Next, AI-powered photo categorization is a feature that would significantly increase the amount of data contributed by users. Users could simply snap a photo, upload it, and not worry about applying tags and other information, since the algorithm would take care of that step. Training a highly effective model can be difficult, but the pathVu system is already designed to make this transition as seamless as possible. Currently, users take a photo, upload it, then apply all relevant information to it, so the system already collects everything needed for training; it is now a question of putting that data to use. In fact, the system could even modify the submission process to capitalize on the opportunity. Instead of receiving the photo and descriptive information at the same time, the system could receive the photo, generate its own descriptive information, and then check it against the user-submitted information. This allows the system to make and verify predictions within a single submission, instead of relying on multiple reports of the same content.
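A minimal sketch of that within-submission check, assuming a hypothetical `classify_photo()` wrapper around whatever image model is eventually trained: the system predicts tags, compares them with the contributor’s tags, and records where they agree or differ as a training and validation signal.

```python
# Sketch of within-submission checking: the model tags the photo, and its
# prediction is compared against the contributor's tags in the same report.
# classify_photo() and the tag vocabulary are assumptions for illustration.

def classify_photo(photo_bytes: bytes) -> set[str]:
    """Stand-in for the trained image model; returns predicted condition tags."""
    # Placeholder prediction so the sketch runs; replace with real inference.
    return {"steps at entrance"}

def check_submission(photo_bytes: bytes, user_tags: set[str]) -> dict:
    """Compare model-predicted tags with user-applied tags for one submission."""
    predicted = classify_photo(photo_bytes)
    return {
        "agreed": predicted & user_tags,      # both model and user reported these
        "model_only": predicted - user_tags,  # candidates to surface back to the user
        "user_only": user_tags - predicted,   # useful training examples for the model
    }

print(check_submission(b"...", {"steps at entrance", "no ramp"}))
```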
A bit further down the road, integrating passive sensing into the pathVu system would allow even more relevant data to be collected without the user lifting a finger. Currently, location services have a margin of error of roughly two to five meters, which is fine for general navigation purposes. Once that margin shrinks to a few inches, the resulting location data can reliably support more detailed inferences. For instance, pathVu could detect that a user entered a building somewhere other than the main entrance, then trigger a probe asking whether they used an alternative entrance and why (e.g., there was a ramp at that entrance). In sum, passive sensing would drastically reduce the work required from users while still collecting valuable, current data for the pathVu ecosystem.
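As one concrete way such a trigger could be implemented once positioning is precise enough, here is a sketch that flags an entry point far from a building’s known main entrance and returns a follow-up probe. The coordinates, distance threshold, and probe wording are assumptions for illustration.

```python
from math import radians, sin, cos, asin, sqrt

# Sketch of a passive-sensing trigger: if a user's recorded entry point is not
# near the known main entrance, ask a short follow-up question. The threshold
# and probe text are illustrative assumptions.

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def entrance_probe(entry_lat, entry_lon, main_entrance, threshold_m=3.0):
    """Return a probe question if the entry point isn't near the main entrance."""
    if haversine_m(entry_lat, entry_lon, *main_entrance) > threshold_m:
        return "It looks like you used a different entrance. Why? (e.g. it had a ramp)"
    return None
```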