
IEEE publishes the world's first framework for coding ethical behaviours into AI


WHY THIS MATTERS IN BRIEF

AI is rapidly being embedded into the world's digital fabric, but these systems are black boxes whose behaviours and decision-making processes flummox even their designers, and with no ethical codes to follow, many people worry about the negative consequences these advanced systems could have on business, culture and society.

 

Artificial intelligence (AI) systems are increasingly being seen as black boxes, and while institutions such as MIT are trying to address the problem, there's another, possibly bigger, problem looming: a lack of AI ethics and governance standards.

According to the IEEE Standards Association, the world's largest technical professional association and the body responsible for setting and governing many of the technology standards we use today, one of the biggest barriers standing in the way of building ethically centred AI systems that would benefit humanity, and avoid the pitfalls of embedded algorithmic biases, is the tech industry's lack of ownership of, and responsibility for, ethics.

 


 

A couple of days ago the IEEE published the first draft of a new framework that they hope will guide the technology industry in building ethical, benevolent and beneficial AI systems.

The document, called Ethically Aligned Design, includes a series of detailed recommendations based on the input of more than 100 AI and non-AI thought leaders working in academia, big business and government, and covers areas including AI ethics, law, philosophy and policy. The IEEE are hoping that this “living document”, as they call it, will become the de facto reference for everyone working in the AI field, irrespective of the industry they sit in or the purpose for which they're designing their new AI systems, whether they're using AI to augment autonomous vehicles or to optimise energy distribution.

In the spirit of collaboration and cooperation, the organisation is also inviting feedback, before a cut-off date of March 6th 2017, via their Global Initiative's website, which also states that all comments will be made publicly available.

In time it’s hoped that the initiative will take on a life of its own and that interested parties will come together to create, proof and validate a new set of IEEE Standards based on its notion of Ethically Aligned Design.

“By providing technologists with peer-driven, practical recommendations for creating ethically aligned autonomous and intelligent products, services, and systems, we can move beyond the fears associated with these technologies and bring valued benefits to humanity today and for the future,” said Konstantinos Karachalios, managing director of the IEEE Standards Association.

The 136-page document is divided into a series of sections, starting with some general principles, such as the need to ensure AI respects human rights, operates transparently and makes automated decisions accountable, before moving on to more specific areas such as how to embed relevant “human norms or values” into systems, tackle potential biases, achieve trust and enable external evaluation of value alignment.
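The document itself stops at recommendations rather than code, but the kind of bias check it gestures at can be made concrete. Below is a minimal, illustrative sketch, not drawn from the IEEE document, of one widely used fairness measure: the gap in positive-decision rates between groups.

```python
# Illustrative sketch of an automated bias check of the kind the
# document alludes to -- not part of Ethically Aligned Design itself.
# Assumes a classifier's binary decisions and a protected attribute.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rates
    between any two groups (0.0 means perfectly equal rates)."""
    rates = {}
    for decision, group in zip(decisions, groups):
        positives, total = rates.get(group, (0, 0))
        rates[group] = (positives + decision, total + 1)
    positive_rates = [p / t for p, t in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Example: loan approvals (1 = approved) split across two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
if gap > 0.2:  # the threshold is an illustrative choice, not an IEEE figure
    print(f"Warning: positive-rate gap of {gap:.0%} between groups")
```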

Another section considers methodologies to guide ethical research and design, and here the tech industry's lack of ownership of, or responsibility for, ethics is flagged as a problem, along with other issues, such as ethics not being routinely included in tech degree programmes. The IEEE also notes the lack of an independent review organisation to oversee algorithmic operation, and the use of “black-box components” in the creation of algorithms, as further obstacles to achieving ethical AI.

 


 

One suggestion to help overcome the tech industry’s ethical blind spots is to ensure those building autonomous technologies are “a multidisciplinary and diverse group of individuals” so that all potential ethical issues are covered, the IEEE writes.

It also argues for the creation of standards providing “oversight of the manufacturing process of intelligent and autonomous technologies” in order to ensure end users are not harmed by autonomous outcomes, and for the creation of “an independent, internationally coordinated body” to oversee whether products meet ethical criteria, both at the point of launch and thereafter, as they evolve, morph and interact with other products.

“When systems are built that could impact the safety or wellbeing of humans, it is not enough to just presume that a system works. Engineers must acknowledge and assess the ethical risks involved with black-box software and implement mitigation strategies where possible,” the IEEE writes. “Technologists should be able to characterise what their algorithms or systems are going to do via transparent and traceable standards. To the degree that we can, it should be predictive, but given the nature of AI systems it might need to be more retrospective and mitigation oriented.

“Similar to the idea of a flight data recorder in the field of aviation, this algorithmic traceability can provide insights on what computations led to specific results ending up in questionable or dangerous behaviours. Even where such processes remain somewhat opaque, technologists should seek indirect means of validating results and detecting harms.”
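To make the flight-recorder analogy concrete, a traceability layer of this sort amounts to logging the inputs, model version and output behind every automated decision so it can be audited after the fact. The sketch below is purely illustrative; the record format and all names are assumptions, not anything the IEEE specifies.

```python
# Illustrative sketch of flight-recorder-style algorithmic traceability.
# All field names are assumptions for illustration, not an IEEE format.
import json
import time
import uuid

def record_decision(log_file, model_id, inputs, output, explanation=None):
    """Append one decision record to an append-only audit log."""
    record = {
        "id": str(uuid.uuid4()),     # unique handle for later review
        "timestamp": time.time(),    # when the decision was made
        "model_id": model_id,        # which model/version decided
        "inputs": inputs,            # what the system saw
        "output": output,            # what it decided
        "explanation": explanation,  # optional human-readable rationale
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")  # one JSON record per line
    return record["id"]

# Example: trace a single loan decision so it can be audited later.
record_decision(
    "decisions.jsonl",
    model_id="credit-scorer-v1.3",
    inputs={"income": 42000, "age": 31},
    output={"approved": False, "score": 0.41},
    explanation="score below 0.5 approval threshold",
)
```

Even a log this simple supports the retrospective, mitigation-oriented auditing the passage describes: when a questionable outcome surfaces, the exact inputs and model version that produced it can be replayed and inspected.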

Ultimately, it concludes that engineers should deploy black-box software services or components “only with extraordinary caution and ethical care,” given the opacity of their decision making process and the difficulty in inspecting or validating these results.

Another section of the document, on the safety and beneficence of artificial general intelligence, also warns that as AI systems become more capable “unanticipated or unintended behaviour becomes increasingly likely and dangerous,” and that retrofitting safety into more generally capable future AI systems may prove difficult.

“Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems,” it suggests.

The document also touches on concerns about the asymmetry inherent in AI systems that are fed by individuals' personal data, noting that in many cases, whether at a societal or a regional level, the gains aren't equally distributed, which over the long term could lead to AI “haves” and AI “have nots”.

 


 

“The artificial intelligence and autonomous systems (AI/AS) driving the algorithmic economy have widespread access to our data, yet we remain isolated from gains we could obtain from the insights derived from our lives,” it writes. “To address this asymmetry there is a fundamental need for people to define, access, and manage their personal data as curators of their unique identity. New parameters must also be created regarding what information is gathered about individuals at the point of data collection. Future informed consent should be predicated on limited and specific exchange of data versus long-term sacrifice of informational assets.”
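What a “limited and specific exchange of data” might look like in practice can be pictured as consent that is scoped to named fields, a single purpose and an expiry date, rather than granted open-endedly. A rough sketch follows; every field name here is hypothetical, not taken from the document.

```python
# Rough sketch of the scoped, time-limited consent the passage describes.
# All names and fields are hypothetical illustrations.
from dataclasses import dataclass
import time

@dataclass
class ConsentGrant:
    subject: str        # whose data this is
    recipient: str      # who may use it
    fields: tuple       # exactly which data items are shared
    purpose: str        # the single, specific use consented to
    expires_at: float   # consent lapses rather than lasting forever

    def permits(self, recipient, field, purpose):
        """True only for the named recipient, field and purpose, pre-expiry."""
        return (recipient == self.recipient
                and field in self.fields
                and purpose == self.purpose
                and time.time() < self.expires_at)

# Example: share step-count data with one insurer, for one purpose, for 30 days.
grant = ConsentGrant("alice", "acme-insurance", ("step_count",),
                     "premium-discount", time.time() + 30 * 24 * 3600)
print(grant.permits("acme-insurance", "step_count", "premium-discount"))  # True
print(grant.permits("acme-insurance", "location", "premium-discount"))    # False
```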

The issue of AI ethics and accountability has been rising up the social and political agenda this year, fuelled in part by high-profile algorithmic failures such as Facebook's inability to filter out fake news during the recent US election. The White House has also commissioned and published its own report into AI, which, worryingly, concluded that trying to estimate the impact of “advanced” and “super” AIs was too hard to even begin to address.
