
New research suggests artificial intelligence agents can develop trust similar to that of humans


Artificial intelligence (AI) has made great strides in the past few years, and even in the past few months. New research published in the journal Management Science finds that AI agents can build trust in much the same way that humans do.

“Human-like trust and trustworthy behavior of AI can emerge from a pure trial-and-error learning process, and the conditions for AI to develop trust are similar to those enabling human beings to develop trust,” says Yan (Diana) Wu of San Jose State University. “Discovering AI’s ability to mimic human trust behavior purely through self-learning processes mirrors conditions fostering trust in humans.”

Wu, with co-authors Jason Xianghua Wu of the University of New South Wales (UNSW Business School), Kay Yut Chen of The University of Texas at Arlington, and Lei Hua of The University of Texas at Tyler, says it's not just about AI learning to play a game; it's a significant stride toward creating intelligent systems that can cultivate social intelligence and trust through pure self-learning interaction.

The paper, "Building Socially Intelligent AI Systems: Evidence from the Trust Game Using Artificial Agents with Deep Learning," constitutes a first step toward building multi-agent decision support systems in which interacting artificial agents can leverage social intelligence to achieve better outcomes.
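The trust game referenced in the paper's title is a standard two-player economic exchange: an investor decides how much of an endowment to send to a trustee, the amount sent is multiplied in transit, and the trustee decides how much of the enlarged pot to return. The amount sent is read as trust and the share returned as trustworthiness. The sketch below is purely illustrative and is not the authors' implementation: it wires that payoff structure to two simple epsilon-greedy trial-and-error learners (the paper uses deep-learning agents), with an assumed endowment of 10 and the commonly used multiplier of 3.

```python
# Illustrative sketch of the trust game with two trial-and-error learners.
# NOT the paper's deep-learning setup; endowment, multiplier, and learner
# design here are assumptions chosen only to show the game's structure.
import random

ENDOWMENT = 10                                   # investor's starting stake (assumed)
MULTIPLIER = 3                                   # transfer is tripled in transit (common version)
SEND_CHOICES = range(ENDOWMENT + 1)              # investor may send 0..10
RETURN_FRACTIONS = [i / 10 for i in range(11)]   # trustee returns 0%..100% of the pot

class Learner:
    """Epsilon-greedy action-value learner over a fixed action set."""
    def __init__(self, actions, lr=0.1, eps=0.1):
        self.actions = list(actions)
        self.q = {a: 0.0 for a in self.actions}
        self.lr, self.eps = lr, eps

    def act(self):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[a])

    def update(self, action, reward):
        # Move the action's value estimate toward the observed payoff.
        self.q[action] += self.lr * (reward - self.q[action])

investor = Learner(SEND_CHOICES)
trustee = Learner(RETURN_FRACTIONS)

for _ in range(50_000):
    sent = investor.act()          # "trust": how much the investor risks
    share_back = trustee.act()     # "trustworthiness": fraction returned
    pot = sent * MULTIPLIER
    returned = share_back * pot
    investor.update(sent, ENDOWMENT - sent + returned)  # investor payoff
    trustee.update(share_back, pot - returned)          # trustee payoff

print("amount the investor learned to send:",
      max(SEND_CHOICES, key=lambda a: investor.q[a]))
print("share the trustee learned to return:",
      max(RETURN_FRACTIONS, key=lambda a: trustee.q[a]))
```

With myopic learners like these, play tends to drift toward the no-trust equilibrium of sending and returning nothing; the paper's contribution is showing that, under learning conditions similar to those that foster trust in humans, trust and trustworthy behavior emerge from self-learning instead.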

“Our research breaks new ground by demonstrating that AI agents can autonomously develop trust and trustworthiness strategies akin to humans in economic exchange scenarios,” says Chen.

The authors explain that contrasting AI agents with human decision-makers could help deepen understanding of AI behaviors in different social contexts.

“Since social behaviors of AI agents can be endogenously determined through interactive learning, it may also provide a new tool for us to explore learning behaviors in response to the need for cooperation under specific decision-making scenarios,” concludes Hua.

More information:
Jason Xianghua Wu et al, Building Socially Intelligent AI Systems: Evidence from the Trust Game Using Artificial Agents with Deep Learning, Management Science (2023). DOI: 10.1287/mnsc.2023.4782

Provided by
Institute for Operations Research and the Management Sciences

 
