WHY THIS MATTERS IN BRIEF
As AI gets better at understanding trust, and at getting people to trust it, we will inevitably find ourselves scammed by it – and much more …
Artificial Intelligence (AI) has made great strides in the past few years, even months. New research in the journal Management Science finds that AI agents can build trust much like humans do.
“Human-like trust and trustworthy behavior of AI can emerge from a pure trial-and-error learning process, and the conditions for AI to develop trust are similar to those enabling human beings to develop trust,” says Yan (Diana) Wu of San Jose State University. “Discovering AI’s ability to mimic human trust behavior purely through self-learning processes mirrors conditions fostering trust in humans.”
Wu, with co-authors Jason Xianghua Wu of the University of New South Wales (UNSW Business School), Kay Yut Chen of The University of Texas at Arlington, and Lei Hua of The University of Texas at Tyler, says it’s not just about AI learning to play a game: it’s a significant stride toward creating intelligent systems that can cultivate social intelligence and trust through pure self-learning interaction.
The paper, “Building Socially Intelligent AI Systems: Evidence from the Trust Game using Artificial Agents with Deep Learning,” constitutes a first step toward building multi-agent decision support systems in which interacting artificial agents can leverage social intelligence to achieve better outcomes.
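For readers unfamiliar with the trust game the paper builds on, a minimal sketch of its canonical structure is below. The specific numbers, the 3x multiplier, and the fixed strategies are illustrative assumptions drawn from the standard version of the game, not details of the paper’s actual experimental setup.

```python
# Minimal sketch of the canonical trust game (an assumed, simplified
# version -- not the paper's actual implementation or parameters).

def trust_game(endowment, send_fraction, return_fraction, multiplier=3):
    """One round: the investor sends part of its endowment, the amount
    grows in transit, and the trustee returns a share of what arrived."""
    sent = endowment * send_fraction        # the investor's act of trust
    received = sent * multiplier            # invested amount is multiplied
    returned = received * return_fraction   # the trustee's trustworthiness
    investor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# Full trust met by an even split leaves both players better off
# than if nothing had been sent:
print(trust_game(endowment=10, send_fraction=1.0, return_fraction=0.5))
# -> (15.0, 15.0)
```

The tension this game captures is why it is a standard testbed for trust: sending more can enlarge the total payoff, but only if the other player reciprocates – which is the behavior the AI agents in the study learned through trial and error.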
“Our research breaks new ground by demonstrating that AI agents can autonomously develop trust and trustworthiness strategies akin to humans in economic exchange scenarios,” says Chen.
The authors explain that contrasting AI agents with human decision-makers could help deepen knowledge of AI behaviors in different social contexts.
“Since social behaviors of AI agents can be endogenously determined through interactive learning, it may also provide a new tool for us to explore learning behaviors in response to the need for cooperation under specific decision-making scenarios,” concludes Hua.