Surely there’s no need to remind anyone how vital UX is to the success of an IT product. In the age of mass digitization, apps (especially web apps) have a very short window to capture the user’s attention. Creating a product that is not only intuitive and good-looking but also connects with the customer in that short time is key to recognizability and brand success. This is why, for example, Typeform designed its product around a conversational interface, making it one of the most widely used tools for creating surveys. Another tool with great UX is Notion, software for writing notes and organizing knowledge. Thanks to its intuitive feel of freedom and a very high level of page customizability, Notion greatly outperforms tools like OneNote or Evernote in terms of design.
To design a product that effectively captures users’ attention and connects with them on an emotional level, we first have to understand their behavior. Usability testing is the current standard method UX designers and researchers use to gain this insight. Let’s examine the basics by reviewing the standard testing methods and their current limitations. Then we present solutions to these problems, showing how AI methods can be applied to drive better usability testing. Finally, we introduce the term UX Mining, which is about extracting powerful insights from the data gathered during usability testing.
Let’s start with the most commonly used method and a big fan favorite: moderated usability testing. A designer meets a tester (on-site or via a video conferencing tool like Zoom) and asks them to perform several tasks in the app or website under analysis. This interactive mode of testing allows for in-depth questioning and provides comprehensive insight into the tester’s feelings, reactions, and expectations. For this reason, such tests are usually performed at the exploratory stages of a project. During each test, the designer takes extensive notes recording testers’ opinions and key findings. Every test also carries a lot of qualitative data, like facial expressions, gestures, or tone of voice, which can generate valuable insights for UX research. However, the fast pace of such tests means that important insights may go uncaught and unnoted. To tackle that issue, designers record testing sessions and rewatch them to manually detect and tag important reactions. And here the main disadvantage of moderated testing rears its head: supervising the tests and later analyzing the recorded data can be very time-consuming. Recruiting the right people, arranging meetings, and conducting the interviews can take several days. What is more, compiling the final report and sharing it with the team takes additional time, as it requires translating qualitative data, usually expressed as written statements or detected reactions, into quantitative form.
Unmoderated testing solves the time-consumption problem of moderated testing at the expense of insightful data. These tests don’t require the presence of a designer and can be performed at a much larger scale, usually with dedicated software like Maze or UsabilityHub. Testers receive a link to an assessment where they follow a set of tasks and answer several survey questions. Even though automation seems like a good solution, we lose many insights and the true understanding of users’ behavior: the designer cannot ask in-depth questions and is limited to analyzing previously designed surveys.
The third and most advanced method is to conduct tests in so-called usability labs. These allow for in-depth analysis of users’ behavior, combining both qualitative and quantitative data. They often use state-of-the-art technology like biosensors (e.g. EEG, GSR) and, through thorough analysis of changes in the recorded signals, derive testers’ feelings or detect problems in performing the given tasks. While the advantages of labs are clear, they are very expensive, requiring a lot of resources and time to run.
Data Mining refers to extracting knowledge from (usually big) data by applying machine learning algorithms. UX Mining, by analogy, refers to extracting insights from usability testing. AI has already proven its worth in various professions, yet relatively few AI applications have been developed for the UX industry. Taking into account all the problems of usability testing described above, we found ways to apply AI to the data gathered during testing sessions. Imagine an algorithm that could take the recorded video from a testing session and automatically tag its most important points, like detected reactions and notable opinions, or even derive its own conclusions and recommendations. Given current advancements in the field, this is surely feasible to implement. That is why we decided to set up a startup called uxmining that applies AI to usability testing, helping UX researchers and designers with their day-to-day work. In the long term, we aim to provide insights comparable to those of usability labs without their drawbacks (like the need for extra hardware). Through AI, similar conclusions can be reached simply by analyzing webcam video and screen sharing, and by asking the right, well-adjusted questions in a remote setting. This way we provide a solution that is not only much faster than traditional methods but also contributes an extra layer of data, both qualitative and quantitative. What is more, our key priority is to make it affordable for every designer, in contrast to advanced lab studies.
But how does it work? As already stated, our main focus is to provide designers with powerful insights from remote usability testing. We apply AI to analyze recorded videos from both moderated and unmoderated tests (in which testers run the experiments with their cameras turned on). The first application is to automatically detect eye movements and overlay gaze points on the shared screen, so the designer knows which elements the tester was looking at. Generating eye-tracking heatmaps, or simply observing how the gaze point moves, allows us to derive additional conclusions. Analyzing the order in which users looked at elements while completing tasks makes it much faster and easier to identify the components responsible for a lack of intuitiveness: for instance, elements users stared at for too long, which can imply confusion about what to do next. On the other hand, we can identify graphical elements that are eye-catching and appealing to users. In the presented video, you can see how this works in our product.
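To make the gaze-analysis idea more concrete, here is a minimal sketch of dispersion-based fixation detection (the classic I-DT approach), which groups raw gaze samples into fixations whose dwell time can flag elements a user stared at for too long. The sample format, function name, and threshold values are illustrative assumptions for this article, not uxmining’s actual implementation.

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Group gaze samples into fixations (dispersion-threshold method).

    samples: list of (timestamp_seconds, x, y) gaze points in screen pixels.
    A fixation is a run of consecutive samples whose bounding box
    (width + height) stays under max_dispersion pixels for at least
    min_duration seconds. Returns (start_t, end_t, center_x, center_y).
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i + 1
        # Grow the window while the gaze points stay tightly clustered.
        while j < n:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j - 1][0] - samples[i][0]
        if duration >= min_duration:
            xs = [s[1] for s in samples[i:j]]
            ys = [s[2] for s in samples[i:j]]
            fixations.append((samples[i][0], samples[j - 1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j  # skip past the detected fixation
        else:
            i += 1  # saccade or noise: slide the window forward
    return fixations
```

A long fixation on a button or form field, measured as `end_t - start_t`, is exactly the “stared at it for too long” signal described above; mapping fixation centers onto UI element bounding boxes then attributes that dwell time to concrete components.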
For now, we present eye tracking as one of the components of our software. We will cover further features that unlock additional insights (such as the previously mentioned emotion detection) in upcoming articles. All updates will be available on our LinkedIn, so feel free to follow us there!