User Perceptions of Intelligent Systems

[Image: The E-meter predicting the mood of a user writing a diary entry in real time. Transparent highlighting shows each word's contribution to the overall score.]

The E-meter, take what you will from the name, is an application designed to study how people understand and build mental models of intelligent systems based on machine learning. Users are asked to write about emotional experiences they have had in the past week. While they write, the E-meter judges the mood of their entry on a scale from negative to positive. Users then complete a survey examining the biases they experienced while using the E-meter, their trust in it and perceptions of its accuracy, and other aspects of the user experience.
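The study's actual implementation isn't shown here; as a rough illustration, a meter like this can rescore the entry on every keystroke and map the result to a position between negative and positive. The `score_mood` function and word lists below are hypothetical stand-ins for whatever model drives the needle.

```python
# Hypothetical sketch of a real-time mood meter (not the study's actual code).
# The word lists and scoring rule are invented for illustration.

POSITIVE = {"happy", "glad", "excited", "relieved"}
NEGATIVE = {"sad", "angry", "anxious", "tired"}

def score_mood(entry: str) -> float:
    """Return a mood score in [-1, 1], negative to positive."""
    words = entry.lower().split()
    if not words:
        return 0.0
    raw = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return max(-1.0, min(1.0, raw / len(words)))

def on_keystroke(entry_so_far: str) -> None:
    """Called after every keystroke; moves the meter to the new score."""
    print(f"E-meter position: {score_mood(entry_so_far):+.2f}")
```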

What did we find?

Our first experiment used an E-meter that was not powered by machine learning at all: the needle simply moved randomly as users typed into the text box. Surprisingly, many users were fooled. They could not tell that the E-meter was moving randomly, and biases led them to believe the system even when it was wrong. Most concerning, some users seemed to believe the system was a better judge of their emotions than they were themselves. We need to design intelligent systems carefully in order to counteract the biases users are prone to.
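The write-up doesn't say exactly how the random movement was generated; one simple way to produce a plausible-looking but meaningless meter is a bounded random walk that takes a small step on each keystroke, as in this sketch (step size and bounds are arbitrary choices, not the study's parameters).

```python
import random

class RandomMeter:
    """A sham E-meter: the needle takes a small random step on each keystroke.

    Illustrative sketch only, not the study's implementation.
    """

    def __init__(self, step: float = 0.05):
        self.position = 0.0  # -1 = very negative, +1 = very positive
        self.step = step

    def on_keystroke(self) -> float:
        # Nudge the needle by a random amount and clamp it to the dial.
        self.position += random.uniform(-self.step, self.step)
        self.position = max(-1.0, min(1.0, self.position))
        return self.position
```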

Our second experiment examined the E-meter in the context of real usage. We developed a regression model that predicted the mood of users' entries from their writing, and we split users into two conditions: a transparent condition, where each word a user wrote was highlighted to show whether it contributed positively or negatively to the overall mood score, and an opaque condition, where users saw no such highlighting. We found a complex interaction between transparency and users' perceptions of accuracy. When the E-meter predicted a user's mood accurately, transparency lowered the user's perception of the E-meter's accuracy. When the E-meter was very inaccurate, predicting the opposite of what a user felt, transparency increased perceptions of its accuracy. We see a similar phenomenon in human-to-human interaction: people often desire an explanation only when another person's actions are unexpected.
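The exact regression model isn't reproduced here, but a linear bag-of-words model makes the transparent condition easy to picture: each word's learned weight is its contribution to the overall score and maps directly to a highlight color. The weights in this sketch are made up; in a trained model they would come from the regression coefficients.

```python
# Illustrative sketch of word-level transparency with a linear model.
# The per-word weights are invented for this example.

WEIGHTS = {"wonderful": 0.8, "friends": 0.4, "stressful": -0.7, "exam": -0.3}

def score_and_highlight(entry: str):
    """Return the overall mood score and per-word contributions."""
    contributions = []
    for word in entry.lower().split():
        weight = WEIGHTS.get(word, 0.0)
        contributions.append((word, weight))  # drives the highlight color
    total = sum(w for _, w in contributions)
    return total, contributions

score, per_word = score_and_highlight("a wonderful week despite a stressful exam")
print(score)     # overall E-meter position
print(per_word)  # (word, contribution) pairs for transparent highlighting
```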

Aaron Springer

Human-Centered AI Researcher, Quantitative UX Researcher at Google
