We employed a range of user research methodologies split into two main parts: the first to gather quantitative data on the kiosks, observing their frequency of use and physical visibility, and the second to gather qualitative data on the interface itself.
Through guerrilla usability tests, user interviews, heuristic analyses, and observation of the kiosks, we gathered these major insights:
- An overwhelming number of digital displays at the airport, combined with a scarcity of kiosks, resulting in low participation at the on-site touchpoints
- Difficulty in locating kiosks: some were not very visible in large areas
- Lack of affordance: no strong call to action on the kiosk display
- A dated interface design with usability issues
Through interviews with the customer feedback team, we noted that the interface language catered more to the back-end than to the user, failing to account for the user's emotional state and purpose in giving feedback. This resulted in inaccurate category selection, which in turn left the routing team without detailed information.
There was also a lack of situational context: to form an accurate picture and improve their service in a given area, the feedback team needed to empathise with the user's experiential journey leading up to the point of giving positive or negative feedback.
Because users gave feedback in many different situations, each with its own prior narrative, we developed a Persona Spectrum overlaid on a prioritisation matrix to classify these scenarios by importance.