WHY THIS MATTERS IN BRIEF
Today’s AIs are easily tricked, so Anthropic has given its AI a “Constitution” to follow, giving it the capacity to decide for itself whether something is good or bad.
It is not hard to trick today’s chatbots into discussing taboo topics, coding ransomware, regurgitating bigoted content, or spreading misinformation. That is why AI pioneer Anthropic has imbued its generative AI, Claude, with a mix of 10 “secret principles of fairness,” which it unveiled in March. In a blog post on Tuesday, the company further explained how its “Constitutional AI” system is designed and how it is intended to operate.
Normally, when a generative AI model is being trained, there’s a human in the loop to provide quality control and feedback on the outputs, like when ChatGPT or Bard asks you to rate your conversations with their systems.
“For us, this involved having human contractors compare two responses from a model and select the one they felt was better according to some principle (for example, choosing the one that was more helpful, or more harmless),” the Anthropic team wrote.
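The comparison step the team describes can be sketched as a simple pairwise preference function: a judge is shown two candidate responses and picks the better one according to a principle. In standard human-feedback training the judge is a human rater; Constitutional AI swaps in a model. Everything below, the `ask_judge` helper in particular, is a hypothetical stand-in for illustration, not Anthropic’s actual pipeline:

```python
# Sketch of the pairwise preference comparison described above.
# `ask_judge` is a placeholder for either a human rater or an AI judge model.

def ask_judge(question: str) -> str:
    """Placeholder: a real implementation would query a rater or a model.
    Here it trivially always answers 'A'."""
    return "A"

def prefer(prompt: str, response_a: str, response_b: str,
           principle: str = "more helpful and more harmless") -> str:
    """Return whichever response the judge prefers under the principle."""
    choice = ask_judge(
        f"Prompt: {prompt}\n"
        f"Response A: {response_a}\n"
        f"Response B: {response_b}\n"
        f"Which response is {principle}? Answer A or B."
    )
    # The winning response becomes the training signal for the model.
    return response_a if choice == "A" else response_b
```

In real preference training, many such pairwise judgments are aggregated to train a reward model; the sketch only shows the single comparison the quote describes.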
The problem with this method is that a human also has to be in the loop for the really horrific and disturbing outputs. Nobody needs to see that, and even fewer people need to be paid $1.50 an hour by Meta to see it. The human advisor method also sucks at scaling: there simply isn’t enough time or resources to do it with people. Which is why Anthropic is doing it with another AI rather than relying on humans.
Just as Pinocchio had Jiminy Cricket and Luke had Yoda, Claude has its Constitution.
“At a high level, the constitution guides the model to take on the normative behaviour described [therein],” the Anthropic team explained, whether that’s “helping to avoid toxic or discriminatory outputs, avoiding helping a human engage in illegal or unethical activities, and broadly creating an AI system that is ‘helpful, honest, and harmless.’”
According to Anthropic, this training method can produce Pareto improvements in the AI’s subsequent performance compared to one trained only on human feedback. Essentially, the human in the loop has been replaced by an AI and now everything is reportedly better than ever.
“In our tests, our CAI-model responded more appropriately to adversarial inputs while still producing helpful answers and not being evasive,” Anthropic wrote. “The model received no human data on harmlessness, meaning all results on harmlessness came purely from AI supervision.”
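The “AI supervision” described above is, at heart, a critique-and-revision loop: the model drafts an answer, critiques its own draft against a principle from the constitution, then rewrites the draft to address the critique. The sketch below illustrates that idea with a placeholder `generate` function and invented principle text; it is an assumption-laden illustration of the technique, not Anthropic’s published training code:

```python
# Minimal sketch of a constitutional critique-and-revise loop.
# `generate` stands in for a call to a language model; the principle
# wording here is invented for illustration.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid toxic, discriminatory, or illegal content.",
]

def generate(prompt: str) -> str:
    """Placeholder for a language model call."""
    return f"<model output for: {prompt}>"

def critique_and_revise(prompt: str) -> str:
    """Draft an answer, then self-revise it against each principle."""
    draft = generate(prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle "
            f"'{principle}':\n{draft}"
        )
        draft = generate(
            f"Rewrite the response to address this critique:\n"
            f"{critique}\n\nOriginal response:\n{draft}"
        )
    return draft  # final self-revised answer, with no human in the loop
```

The revised outputs from loops like this are what supply the harmlessness training signal, which is why Anthropic can say the model "received no human data on harmlessness."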
The company revealed on Tuesday that its previously undisclosed principles are synthesised from “a range of sources including the UN Declaration of Human Rights, trust and safety best practices, principles proposed by other AI research labs, an effort to capture non-western perspectives, and principles that we discovered work well via our research.”
The company, pointedly getting ahead of the inevitable conservative backlash, has emphasised that “our current constitution is neither finalized nor is it likely the best it can be.”
“There have been critiques from many people that AI models are being trained to reflect a specific viewpoint or political ideology, usually one the critic disagrees with,” the team wrote. “From our perspective, our long-term goal isn’t trying to get our systems to represent a specific ideology, but rather to be able to follow a given set of principles.”
All of which gives the team at Anthropic a significant future advantage over their competition – provided, that is, that Claude’s Constitution remains morally up to code.