A new algorithm helps machines sense and respond to human trust

WHY THIS MATTERS IN BRIEF

As humans and machines start working closely together, researchers think we need to create machines that we can trust – but simulating trust might not be the right answer.

 

As robots and other machines become more entwined with our daily lives, the field of improving human-machine interaction is growing. Soul Machines in New Zealand, for example, is trying to create the first “digital humans,” who do everything from helping customers select mortgages to teaching school kids about energy in friendly and intuitive ways. Now new work in the field has produced a range of “AI classification models” that show precisely how much humans trust the “intelligent” machines they collaborate with, and the team behind them believes the models will go a long way towards improving the quality of interactions and teamwork – something that will become especially pertinent as more companies pair their employees with “AI assistants” as part of the Future of Work.

 


 

The recent work by assistant professor Neera Jain and associate professor Tahira Reid, from Purdue University, is just one step towards the overall goal of designing intelligent machines capable of changing their behaviour to enhance their human teammates’ trust in them. From my perspective, though, in human terms this could be akin to people faking their relationships with one another in order to get along – something most people wouldn’t be comfortable with in the first place. But it seems anything goes in the digital world we live in, so I’m not surprised, and the approach obviously has its pros and cons – at the end of the day you could argue it’s just semantics.

 


“Intelligent machines, and more broadly, intelligent systems are becoming increasingly common in the everyday lives of humans,” Jain said. “As humans are increasingly required to interact with intelligent systems, trust becomes an important factor for synergistic interactions.”

Trust-aware machines could improve the efficiency of human-machine interactions; currently, distrust in machines can result in system breakdowns. Purdue University gives the example of aircraft pilots and industrial workers who routinely interact with automated systems but may override them if they think the system is faltering.

“It is well established that human trust is central to successful interactions between humans and machines,” Reid said.

 


 

The researchers have developed two types of “classifier-based empirical trust sensor models,” a step closer to improving the relationship between humans and intelligent machines. The models gauge trust by gathering data from human subjects in two ways: monitoring brainwave patterns via electroencephalography (EEG), and measuring changes in the electrical characteristics of the skin, known as galvanic skin response. Together these provide psychophysiological “feature sets” that correlate with trust. For the study, 45 human subjects wore EEG headsets and a device on one hand to measure galvanic skin response.
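To make the idea of a psychophysiological “feature set” concrete, here is a minimal sketch of windowed feature extraction from EEG and skin-response samples. The specific features (mean, variance, signal power) are illustrative stand-ins chosen for this example; the paper's actual feature sets are not described in the article.

```python
import statistics

def extract_features(eeg_window, gsr_window):
    """Compute a toy psychophysiological feature vector from one
    time window of EEG and galvanic-skin-response (GSR) samples.
    A trust classifier would consume vectors like this one."""
    return {
        "eeg_mean": statistics.fmean(eeg_window),
        "eeg_var": statistics.pvariance(eeg_window),
        "eeg_power": statistics.fmean(x * x for x in eeg_window),
        "gsr_mean": statistics.fmean(gsr_window),
        "gsr_var": statistics.pvariance(gsr_window),
    }

# One short window of synthetic samples, standing in for live sensor data
eeg = [0.1, -0.2, 0.3, 0.05, -0.15, 0.2]
gsr = [2.0, 2.1, 2.3, 2.2, 2.4, 2.5]
features = extract_features(eeg, gsr)
```

In a real-time system, windows like these would be computed continuously as the sensor streams arrive, so the downstream classifier always has a fresh trust estimate.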

One model uses the same set of psychophysiological features for all 45 participants, while the other is tailored to each individual. The individualized model is more accurate but takes much longer to train: the two models achieved mean accuracies of 71 percent and 79 percent, respectively. It is the first time EEG has been used to gauge trust in real time.
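The general-versus-individualized distinction can be sketched with a toy classifier: train one model on everyone's pooled data, and one model per subject. The nearest-centroid classifier and the two-dimensional feature vectors below are my own simplification for illustration, not the classifiers used in the study.

```python
def train_centroids(samples):
    """Nearest-centroid 'trust' classifier: average the feature
    vectors seen for each label (trust / distrust)."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Predict the label whose centroid is closest (squared distance)."""
    def dist(lbl):
        return sum((a - b) ** 2 for a, b in zip(centroids[lbl], features))
    return min(centroids, key=dist)

# Hypothetical per-subject data: subject -> list of (features, label) pairs
data = {
    "s1": [([0.9, 0.1], "trust"), ([0.2, 0.8], "distrust")],
    "s2": [([0.7, 0.3], "trust"), ([0.1, 0.9], "distrust")],
}

# General model: pool every subject's samples together
general = train_centroids([s for subj in data.values() for s in subj])

# Individualized models: one classifier trained per subject
individual = {subj: train_centroids(samples) for subj, samples in data.items()}
```

The trade-off the researchers report falls out naturally: the individualized models can fit each person's idiosyncratic responses (hence higher accuracy), but they require collecting and training on labelled data for every new user.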

“We are using this data in a very new way,” Jain said. “We are looking at it in sort of a continuous stream as opposed to looking at brain waves after a specific trigger or event. We are interested in using feedback-control principles to design machines that are capable of responding to changes in human trust level in real time to build and manage trust in the human-machine relationship.”
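Jain's mention of feedback-control principles suggests a loop in which the machine adjusts its behaviour based on the live trust estimate. A minimal sketch, assuming a simple proportional controller and a single adjustable knob (here a hypothetical "transparency" level, e.g. how much of its reasoning the machine exposes):

```python
def trust_control_step(estimated_trust, target_trust, transparency, gain=0.5):
    """One step of a proportional feedback loop: if estimated trust
    falls below the target, raise the machine's transparency; if
    trust is already high, scale it back. Output clamped to [0, 1]."""
    error = target_trust - estimated_trust
    return max(0.0, min(1.0, transparency + gain * error))

# Simulated stream of real-time trust estimates from the sensor model
trust_stream = [0.3, 0.4, 0.55, 0.7]
transparency = 0.5
for t in trust_stream:
    transparency = trust_control_step(t, target_trust=0.7, transparency=transparency)
```

This is only one plausible reading of "feedback-control principles"; the controller design, the controlled behaviour, and the target trust level are all assumptions for the sake of illustration.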

 


 

“In order to do this, we require a sensor for estimating human trust level, again in real-time. The results presented in this paper show that psychophysiological measurements could be used to do this. A first step toward designing intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor [or sensor system] that will enable machines to estimate human trust level in real time,” Jain continued.

The study has been published in a special issue of the Association for Computing Machinery’s Transactions on Interactive Intelligent Systems, titled “Trust and Influence in Intelligent Human-Machine Interaction.”

 

Source: Purdue University
