
Google’s democratic AI redistributes wealth better than politicians

WHY THIS MATTERS IN BRIEF

There’s a lot of conversation about the best way to reduce global wealth inequality, and it looks like AI might be better at it than humans …

It’s no secret that the overwhelming majority of wealth in the United States is concentrated at the very top, creating staggering levels of poverty and inequality that vastly outpace those of other supposedly “wealthy” nations. But while the current political system ensures that this upward extraction of wealth continues, Artificial Intelligence (AI) researchers have begun playing with a fascinating question: Is machine learning better equipped than humans to create a society that divides resources more equitably?

The answer, according to a recent paper published in Nature Human Behaviour from researchers at Google’s DeepMind, seems to be yes – at least as far as the study’s participants are concerned.

The paper describes a series of experiments in which a deep neural network was tasked with divvying up resources in a way that the human players preferred. The humans took part in an online economic game – called a “public goods game” in economics – where each round they chose whether to keep a monetary endowment or contribute a chosen number of coins to a collective fund. These funds were then returned to the players under three different redistribution schemes based on existing human economic systems – and one additional scheme created entirely by the AI, called the Human Centered Redistribution Mechanism (HCRM). The humans then voted to decide which system they preferred.
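
For readers who want to see the mechanics, here is a rough Python sketch of what one round of a game like this might look like. The endowments, the pot multiplier, and the exact formulas for the baseline schemes are illustrative assumptions made for this sketch, not the parameters used in the paper.

```python
# A rough sketch of one round of a public goods game with three baseline
# redistribution schemes. Endowments, the pot multiplier, and the exact
# formulas are illustrative assumptions, not the paper's parameters.

def play_round(endowments, contributions, multiplier=1.6, scheme="egalitarian"):
    """Pool the contributions, grow the pot, then pay it back out."""
    pot = sum(contributions) * multiplier
    n = len(endowments)

    if scheme == "egalitarian":            # equal shares for everyone
        payouts = [pot / n] * n
    elif scheme == "libertarian":          # proportional to coins contributed
        total = sum(contributions) or 1
        payouts = [pot * c / total for c in contributions]
    elif scheme == "liberal_egalitarian":  # proportional to contribution
        shares = [c / e for c, e in zip(contributions, endowments)]  # ...relative to endowment
        total = sum(shares) or 1
        payouts = [pot * s / total for s in shares]
    else:
        raise ValueError(f"unknown scheme: {scheme}")

    # Each player keeps whatever they did not contribute, plus their payout.
    return [e - c + p for e, c, p in zip(endowments, contributions, payouts)]


# Example: an unequal start, where everyone contributes half their endowment.
endowments = [10, 10, 2, 2]
contributions = [5, 5, 1, 1]
print(play_round(endowments, contributions, scheme="liberal_egalitarian"))
```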

It turns out the distribution scheme created by the AI was the one preferred by the majority of participants. While the strict egalitarian and libertarian baselines split the returns either equally or in proportion to how much each player contributed, the AI’s system redistributed wealth in a way that specifically addressed the advantages and disadvantages players had at the start of the game – and that approach ultimately won participants over in a majoritarian vote.

“Pursuing a broadly liberal egalitarian policy, [HCRM] sought to reduce pre-existing income disparities by compensating players in proportion to their contribution relative to endowment,” the paper’s authors wrote. “In other words, rather than simply maximizing efficiency, the mechanism was progressive: it promoted enfranchisement of those who began the game at a wealth disadvantage, at the expense of those with higher initial endowment.”
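
To make concrete why that rule is progressive, here is a small, made-up comparison: the same pot split once in proportion to absolute contributions and once in proportion to contributions relative to endowments. The numbers are invented purely for illustration and are not drawn from the study.

```python
# Made-up numbers showing why the relative rule is progressive: a pot of
# 12 coins split two ways. The "poor" player gave up all of a small
# endowment; the "rich" player gave up less than half of a big one.

pot = 12.0
players = {"rich": {"endowment": 10, "contribution": 4},
           "poor": {"endowment": 2,  "contribution": 2}}

# Libertarian-style split: proportional to coins contributed.
total_abs = sum(p["contribution"] for p in players.values())
absolute_split = {name: round(pot * p["contribution"] / total_abs, 2)
                  for name, p in players.items()}

# Progressive split, as the quote describes it: proportional to
# contribution divided by starting endowment.
total_rel = sum(p["contribution"] / p["endowment"] for p in players.values())
relative_split = {name: round(pot * (p["contribution"] / p["endowment"]) / total_rel, 2)
                  for name, p in players.items()}

print(absolute_split)  # {'rich': 8.0, 'poor': 4.0}
print(relative_split)  # {'rich': 3.43, 'poor': 8.57}
```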

The method differs from a lot of AI projects, which focus on establishing an authoritative “ground truth” model of reality that is then used to make decisions – and, in doing so, firmly embed the biases of their creators.

“In AI research, there is a growing realization that to build human-compatible systems, we need new research methods in which humans and agents interact, and an increased effort to learn values directly from humans to build value-aligned AI,” the researchers wrote.

“Instead of imbuing our agents with purportedly human values a priori, and thus potentially biasing systems towards the preferences of AI researchers, we train them to maximize a democratic objective: to design policies that humans prefer and thus will vote to implement in a majoritarian election.”
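
As a rough illustration of what a “democratic objective” could look like in code, the sketch below scores a candidate redistribution mechanism by the share of simulated players who would vote for it over a rival. The simple voting rule here – back whichever mechanism paid you more – is an assumption made for this sketch, not the learned voter model the researchers actually used.

```python
# A deliberately simplified sketch of a "democratic objective": score a
# candidate redistribution mechanism by how many simulated players would
# vote for it over a rival. The voting rule (back whichever mechanism
# paid you more) is an assumption for illustration only.

def vote_share(candidate_payouts, rival_payouts):
    """Fraction of players who prefer the candidate mechanism."""
    votes = sum(1 for c, r in zip(candidate_payouts, rival_payouts) if c > r)
    return votes / len(candidate_payouts)

# A mechanism-designing agent would be trained to maximise this number,
# its expected vote share in a head-to-head majoritarian election,
# rather than a measure of welfare chosen in advance by researchers.
print(vote_share([6.0, 6.0, 5.5, 5.5], [9.0, 2.0, 2.0, 2.0]))  # 0.75
```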

Of course, we don’t need an AI to show us that more sustainable ways of living are possible. On a smaller scale, mutual aid and community organizations that redistribute resources have existed forever. So has scientific evidence showing that – contrary to the dogma of hyper-competitive capitalism – human beings are naturally predisposed toward cooperation, sharing, and collective prosperity.

While the AI’s system was preferred by human participants, that doesn’t necessarily mean it would equitably satisfy the needs of humans on a larger scale. The researchers are also quick to point out that the experiments are not a radical proposal for AI-based governance, but a framework for future research on how AI could intervene in public policy.

“This is fundamental research asking questions about how an AI can be aligned with a whole group of humans and how to model and represent humans in simulations, explored in a toy domain,” Jan Balaguer, a DeepMind researcher who co-authored the paper, told Motherboard. “Many of the problems that humans face are not merely technological but require us to coordinate in society and in our economies for the greater good. For AI to be able to help, it needs to learn directly about human values.”
