US Presidential report on AI tries to prepare society for what’s coming

WHY THIS MATTERS IN BRIEF

US Government report lays out guidance for the use and regulation of AI, and puts regulating super-intelligent AI in the “too hard” bucket.

Artificial Intelligence (AI) research and development is starting to reach critical mass, with new breakthroughs being announced almost every day. Now the US Office of Science and Technology Policy (OSTP), which advises President Barack Obama directly on AI matters, has prepared a new report on a technology it sees as increasingly poised to reshape the way we live and work.

Titled Preparing for the Future of Artificial Intelligence, the report makes 23 policy recommendations on a range of topics concerned with how best to harness the power of machine learning and algorithm-driven intelligence for the benefit of society.

 


 

The OSTP position is that government has several roles to play in driving the direction of AI.

Namely, “It should convene conversations about important issues and help to set the agenda for public debate. It should monitor the safety and fairness of applications as they develop, and adapt regulatory frameworks to encourage innovation while protecting the public. It should support basic research and the application of AI to public goods, as well as the development of a skilled, diverse workforce. And government should use AI itself, to serve the public faster, more effectively, and at lower cost.”

The report makes the distinction between narrow AI – which addresses specific application areas such as playing strategic games, language translation, autonomous vehicles, and image recognition – and general AI – a notional future AI system that exhibits apparently intelligent behaviour at least as advanced as a person across the full range of cognitive tasks.

 


 

Prominent voices, including those of Elon Musk and Stephen Hawking, have expressed concern about the potential dangers of Artificial General Intelligence (AGI), but the authors of the report don’t share that viewpoint:

“People have long speculated on the implications of computers becoming more intelligent than humans. Some predict that a sufficiently intelligent AI could be tasked with developing even better, more intelligent systems, and that these in turn could be used to create systems with yet greater intelligence, and so on, leading in principle to an “intelligence explosion” or “singularity” in which machines quickly race far ahead of humans in intelligence.
 
In a dystopian vision of this process, these super-intelligent machines would exceed the ability of humanity to understand or control. If computers could exert control over many critical systems, the result could be havoc, with humans no longer in control of their destiny at best and extinct at worst. This scenario has long been the subject of science fiction stories, and recent pronouncements from some influential industry leaders have highlighted these fears.
 
A more positive view of the future held by many researchers sees instead the development of intelligent systems that work well as helpers, assistants, trainers, and teammates of humans, and are designed to operate safely and ethically.”

The focus of the report is therefore on narrow AI and its implications, the NSTC Committee on Technology having decided that “the long-term concerns about super-intelligent general AI should have little impact on current policy.”

“Advances in AI technology have opened up new markets and new opportunities for progress in critical areas such as health, education, energy, and the environment,” write John Holdren, Assistant to the President for Science and Technology and Director of the Office of Science and Technology Policy, and Megan Smith, US Chief Technology Officer, in a letter introducing the report.

They continue, “In recent years, machines have surpassed humans in the performance of certain specific tasks, such as some aspects of image recognition. Experts forecast that rapid progress in the field of specialized AI will continue. Although it is very unlikely that machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years, it is to be expected that machines will reach and exceed human performance on more and more tasks.”

The report might not address the threat of a hostile AI – the kind for which Google is trying to create a kill switch – but identifying and minimizing risk is a key objective and a recurring theme in the report’s seven topic sections: Applications of AI for Public Good; AI and Regulation; Research and Workforce; Economic Impacts of AI; Fairness, Safety, and Governance; Global Considerations and Security; and Preparing for the Future.

 


 

“As AI technologies move toward broader deployment, technical experts, policy analysts, and ethicists have raised concerns about unintended consequences of widespread adoption,” the authors write.

Further, “Use of AI to make consequential decisions about people, often replacing decisions made by human-driven bureaucratic processes, leads to concerns about how to ensure justice, fairness, and accountability.”

On this matter, the authors posit that transparency is needed around the algorithms, the data, and the process of AI decision-making.

This is followed by a dose of common sense:

“Ethics can help practitioners understand their responsibilities to all stakeholders, but ethical training should be augmented with technical tools and methods for putting good intentions into practice by doing the technical work needed to prevent unacceptable outcomes.”

As an example, when it comes to safely transitioning AI tech from the lab to the open world, the authors note that “Experience in building other types of safety-critical systems and infrastructure, such as aircraft, power plants, bridges, and vehicles, has much to teach AI practitioners.”
