WHY THIS MATTERS IN BRIEF
Decades after AI became a “thing,” no government has any real idea of how to regulate its development, and that’s an issue.
Following in the footsteps of the European Union, which recently released its first Artificial Intelligence (AI) regulatory framework, one that could see companies’ valuable AIs deleted if they fail to meet certain expectations, the Biden administration this week unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems, including guidelines for protecting people’s personal data and limiting surveillance.
The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fuelled world, officials said.
“This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies,” said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. “We can and should expect better and demand better from our technologies.”
The office said the white paper represents a major advance in the administration’s agenda to hold technology companies accountable, and highlighted various federal agencies’ commitments to weighing new rules and studying the specific impacts of AI technologies. The document emerged after a year-long consultation with more than two dozen different departments, and also incorporates feedback from civil society groups, technologists, industry researchers and tech companies including Palantir and Microsoft.
It puts forward five core principles that the White House says should be built into AI systems to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.
The non-binding principles cite academic research, agency studies and news reports that have documented real-world harms from AI-powered tools, including facial recognition tools that contributed to wrongful arrests and an automated system that discriminated against loan seekers who attended a historically black college or university.
The white paper also said parents and social workers alike could benefit from knowing whether child welfare agencies were using algorithms to help decide when families should be investigated for maltreatment, an area that has already caused problems, with several AI systems automatically declining people benefits on grounds ranging from health status to ethnicity.
Earlier this year, after the publication of an AP review of an algorithmic tool used in a Pennsylvania child welfare system, OSTP staffers reached out to sources quoted in the article to learn more, according to multiple people who participated in the call. AP’s investigation found that the Allegheny County tool in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children.
In May, sources said Carnegie Mellon University researchers and staffers from the American Civil Liberties Union spoke with OSTP officials about child welfare agencies’ use of algorithms. Nelson said protecting children from technology harms remains an area of concern.
“If a tool or an automated system is disproportionately harming a vulnerable community, there should be, one would hope, that there would be levers and opportunities to address that through some of the specific applications and prescriptive suggestions,” said Nelson, who also serves as deputy assistant to President Joe Biden.
OSTP did not provide additional comment about the May meeting.
Still, because many AI-powered tools are developed, adopted or funded at the state and local level, the federal government has limited oversight regarding their use. The white paper makes no specific mention of how the Biden administration could influence specific policies at state or local levels, but a senior administration official said the administration was exploring how to align federal grants with AI guidance.
The white paper has no power over the tech companies that develop these tools, nor does it include any new legislative proposals. Nelson said agencies would continue to use existing rules to prevent automated systems from unfairly disadvantaging people.
The white paper also did not specifically address AI-powered technologies funded through the Department of Justice, whose civil rights division separately has been examining algorithmic harms, bias and discrimination, Nelson said.
Tucked between the calls for greater oversight, the white paper also said that, when appropriately implemented, AI systems have the power to bring about lasting benefits to society, such as helping farmers grow food more efficiently or identifying diseases.
“Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone. This important progress must not come at the price of civil rights or democratic values,” the document said.