Visuäly is a set of visualization techniques that enables users to explore historical data on bus conditions. I designed and evaluated a prototype, then compared it with an existing system. First, I conducted a preliminary survey to assess the existing system. Then, I ran four design iterations, each consisting of a moderated remote usability testing session. Finally, I conducted a survey to compare the two systems. Participants understood and enjoyed interacting with the proposed solution, and also preferred it to the existing system.
The huge amount of information generated by the Internet of Things presents both challenges and opportunities for information visualization. One example is the use of sensors in the buses of Tampere Public Transport: all buses are now equipped with GPS receivers and engine sensors that monitor data such as fuel consumption, engine temperature, and speed.
The information provided by the sensors poses interesting challenges for creating meaningful visualizations. My goal is to present and evaluate a set of novel visualization techniques, then compare them with the existing visualization through subjective measurements. I designed and evaluated the prototype to answer these questions:
- Is the proposed visualization easy to understand and learn?
- How do human subjects perceive the proposed solution compared to the existing system?
- How can we apply the Human-Centered Design process to interactive visualization development?
The main goal of the visualization techniques is to present historical data on bus conditions. Thus, it is natural to consider the bus company the main user group. However, I also considered residents, in order to increase public participation in urban ecosystem development. [EU Commission]
It is challenging to fulfill the needs of the bus company and residents at the same time, since they may have different goals and purposes when interacting with the visualization. Therefore, I tried to find a balance: the visualization has to present general information that residents can understand in an engaging format, while also giving experts the option to explore the information in detail.
There were three main phases: insight generation, solution design, and comparison with the existing visualization. The first phase gathered user feedback on the existing visualization and generated ideas for designing the solution. The second phase comprised four iterations, each consisting of designing a prototype and evaluating it with remote usability testing. Finally, the result of the iterations, in the form of a prototype, was compared with the existing visualization.
| | Insight | 1st iteration | 2nd iteration | 3rd iteration | 4th iteration | Static version | Existing vis. |
|---|---|---|---|---|---|---|---|
| Methods | survey, informal usability test | usability test, interview | usability test, interview | usability test, interview, questionnaire | usability test, interview, questionnaire | survey | survey |
| Participants | 52 + 1 | 3 | 8 | 6 | 5 | 5 | 54 |
| Data analysis | qualitative | qualitative, quantitative | qualitative, quantitative | qualitative, quantitative | qualitative, quantitative | qualitative | qualitative |
I held an online survey and asked the respondents whether they understood the purpose of the service and which machine condition parameters were important to them. In addition, I held an informal usability session with a participant in Tampere to better understand how the participant used the existing service.
Sample of online campaigns
Following the insight generation phase, I moved on to designing the solution. I modified the Design Sprint to fit my resources and needs. While it is a great method for testing an initial idea using static pictures and transitions, it is less useful for evaluating interaction techniques. I therefore incorporated more prototype-building activities and, for that reason and because of resource constraints, extended each sprint to ten days.
The modified method for each iteration was as follows. First, I sent out invitations to join online usability testing by email and social media. The invitation acted as a screening questionnaire, enabling me to filter participants by certain criteria.
Then, I defined a set of personas, epics, user flows, visualization tasks, and possible usage scenarios. These served as guidance when I started to design the prototype. After that, I sketched the ideas on paper for self-evaluation, then used SketchApp and Axure to create the interactive prototypes. At the same time, I screened participants and scheduled usability testing appointments.
I used post-it notes to lay out the epics
After that, I ran remote testing sessions at the end of the week with Google Hangouts on Air (GHOA). I designed the usability tasks beforehand, based on the visualization tasks and goals. In each session, I asked the participant to perform the tasks while thinking aloud. Then, I asked them to fill in a System Usability Scale (SUS) questionnaire and conducted an interview.
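SUS scoring follows a standard formula: each of the ten 1–5 Likert responses is normalized (odd-numbered, positively worded items contribute response − 1; even-numbered, negatively worded items contribute 5 − response), and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch (the example responses are illustrative, not actual study data):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses, given in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        # Items at even indices (questions 1, 3, ...) are positively
        # worded: contribution is (response - 1). Items at odd indices
        # (questions 2, 4, ...) are negatively worded: (5 - response).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# A participant answering 4 to every positive item and 2 to every
# negative item contributes 3 per item: 10 * 3 * 2.5 = 75.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Averaging these per-participant scores gives the iteration-level SUS result reported later.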
Prototype of the first iteration
Finally, I analyzed the results of all the sessions by measuring task completion rate and time. I also transcribed the conversations from the usability sessions and interviews to capture more accurate user feedback. The results served as recommendations for the next iteration.
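The two quantitative measures can be computed straightforwardly; the sketch below uses a hypothetical session log (participant IDs, task name, and timings are invented for illustration), with the mean time taken over successful attempts only:

```python
# Hypothetical session log: (participant, task, completed, seconds).
sessions = [
    ("P1", "find-unusual-condition", True, 48),
    ("P2", "find-unusual-condition", True, 95),
    ("P3", "find-unusual-condition", False, 180),
]

def task_metrics(records):
    """Return (completion rate, mean time of successful attempts)."""
    done = [secs for _, _, ok, secs in records if ok]
    rate = len(done) / len(records)
    mean_time = sum(done) / len(done) if done else None
    return rate, mean_time

rate, mean_time = task_metrics(sessions)
print(f"completion rate {rate:.0%}, mean time {mean_time:.1f}s")
```

Excluding failed attempts from the mean keeps timeouts and abandoned tasks from inflating the time measure; the completion rate captures them instead.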
Prototype of the second iteration
I conducted four iterations, each with its own goal, tasks, and prototype. The first iteration focused on acceptance of the general idea: whether the participant understood the purpose and got the picture of the visualization. The second iteration resulted in a complete redesign of the previous iteration's prototype.
After two iterations, I started to validate the ideas and features. This allowed me to explore the visualization techniques further and collect more qualitative information. To achieve that, I slightly modified the tasks and interview questions, and introduced the SUS questionnaire starting from the third iteration.
Prototype of the third iteration
The fourth iteration concluded the design process. Unlike the previous iterations, its prototype was built on top of the third iteration's prototype, which enabled me to experiment with more detailed interaction techniques.
I conducted an online survey to compare the proposed solution with the existing visualization. The procedure and method were generally the same as for the online survey in the insight generation phase. However, there were some additions and changes to the questionnaire, because the proposed solution took a different perspective than the existing visualization. To find out whether interactivity brought any additional value to the proposed solution, I also compared the interactive version with static images.
In the fourth iteration, all tasks were completed successfully by all participants, with some variability in completion time. The Landing and Dashboard pages were understood correctly by all participants. Most participants grasped how to compare the condition of all bus lines, and all of them were able to find the times when a bus was in an unusual condition.
More than half of the participants (31 of 54) had not heard of Älynysse, while 13 participants had tried it, whether by trying the existing visualization, answering the previous survey, or participating in the usability tests. The remaining 10 participants had heard of Älynysse but had not tried it.
The proposed solution was rated 3.83 on a 5-point scale, where 1 means bad and 5 means very good. This rating is higher than that of the existing visualization. It is also worth noting that only one participant rated it 1 and one rated it 2, whereas for the existing visualization two participants gave a rating of 2 and six gave a rating of 1.
Visuäly screens and modules. (a) Landing page and dashboard screen. (b) Bus line states: inactive, hovered on the map module, hovered and clicked on the daily condition module. (c) Overall statistic module states: hovered and clicked. (d) Another variation of clicked state in overall statistic module, for the ‘total’ category. (e) Expanded state of CO2 emissions module.
Based on the analysis and results of all the design iterations, the proposed solution is easy to understand and learn. Participants' feedback from the usability testing sessions was positive, and their performance in completion time and completion rate was strong, even though they were not municipality or transportation managers.
Participants preferred the proposed solution. The comparison survey shows that it received a higher satisfaction rating than the existing solution. A detailed analysis of the responses in the insight generation survey reveals that most participants were confused by the existing solution, whereas most feedback in the comparison survey was positive.
The implementation of the Human-Centered Design process yielded positive results for the design iterations. Recommendations from the previous iteration could be executed quickly, thanks to the use of interactive prototypes. The design sprint also enabled us to test more design options than a traditional development cycle would.