A method with roots in AI uncovers how humans make choices in groups and social media

November 27, 2019

by University of Washington

https://phys.org/news/2019-11-method-roots-ai-uncovers-humans.html

The choices we make in large group settings—such as in online forums and social media—might seem fairly automatic. But the decision-making process behind them is more complicated than it seems, and researchers have been working to understand what drives that seemingly intuitive process.

Now, new University of Washington research has discovered that in large groups of essentially anonymous members, people make choices based on a model of the "mind of the group" and an evolving simulation of how a choice will affect that theorized mind.

Using a mathematical framework with roots in artificial intelligence and robotics, UW researchers were able to uncover the process by which a person makes choices in groups. They also found that the framework predicted a person's choices more often than traditional descriptive methods did. The results were published Wednesday, Nov. 27, in Science Advances.
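The article doesn't give the framework's actual equations, but the mechanism it describes, an agent that maintains a model of the group's "mind" and forward-simulates how its own choice would shift that model before acting, can be sketched in code. The sketch below is purely illustrative and is not the paper's method: the public-goods-style game, the `update_belief` learning rule, the payoff function, and the 1/group_size influence model are all assumptions made for the example.

```python
# Illustrative sketch only; the article does not specify the actual model.
# Assumed setting: each round a player picks a contribution in [0, 1],
# keeps a belief about the group's mean contribution (the "mind of the
# group"), and simulates how each candidate choice would nudge that mean.

def update_belief(belief, observed_group_mean, learning_rate=0.3):
    """Move the belief about the group's mean toward what was observed."""
    return belief + learning_rate * (observed_group_mean - belief)

def simulated_payoff(my_action, predicted_group_mean, multiplier=1.6):
    """Assumed payoff: a share of the multiplied pot minus own cost."""
    pot = multiplier * (my_action + predicted_group_mean)
    return pot / 2.0 - my_action

def choose_action(belief, group_size=5,
                  candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the candidate whose forward-simulated outcome scores best."""
    best, best_value = None, float("-inf")
    for action in candidates:
        # Assumed influence model: the player's own action pulls the
        # group's mean toward it by a 1/group_size share.
        predicted_mean = belief + (action - belief) / group_size
        value = simulated_payoff(action, predicted_mean)
        if value > best_value:
            best, best_value = action, value
    return best
```

With these assumed numbers the forward simulation favors free-riding (low contributions), which is one classic outcome in public-goods games; different multipliers or influence models would shift that. The point of the sketch is only the structure: believe, simulate, then choose.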

...

Given that the model provides a quantitative explanation for human behavior, Rao wondered if it may be useful when building machines that interact with humans.

"In scenarios where a machine or software is interacting with large groups of people, our results may hold some lessons for AI," he said. "A machine that simulates the 'mind of a group' and simulates how its actions affect the group may lead to a more human-friendly AI whose behavior is better aligned with the values of humans."


Re: "A machine that simulates the 'mind of a group' and simulates how its actions affect the group may lead to a more human-friendly AI whose behavior is better aligned with the values of humans."
Why would an AI designed and built by humans not already be inherently human-friendly? Are AI designers and builders inhuman?

Blog entry information
Author: southwestforests
Read time: 1 min read
Views: 453