Research shows GPT-4 becomes 30 percent better when it critiques itself


By getting AI to critique itself, researchers have not only given it a vital human skill but also found new ways to improve AIs without having to recode or redevelop them.


Even if the unlikely six-month moratorium on Artificial Intelligence (AI) development called for by Elon Musk and the Future of Life Institute goes ahead, it seems GPT-4 is capable of huge leaps forward if it simply takes a good hard look at itself: when researchers asked it to critique its own work, they saw a 30% performance boost.


“It’s not every day that humans develop novel techniques to achieve state-of-the-art standards using decision-making processes once thought to be unique to human intelligence,” wrote researchers Noah Shinn and Ashwin Gopinath. “But, that’s exactly what we did.”




The “Reflexion” technique takes GPT-4’s already impressive ability to perform various tests and introduces “a framework that allows AI agents to emulate human-like self-reflection and evaluate its performance.” Effectively, it adds extra steps in which GPT-4 designs tests to critique its own answers, looking for errors and missteps, then rewrites its solutions based on what it has found.
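The generate–critique–rewrite loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: `call_model` is a hypothetical stand-in for any chat-completion API, and the prompts and `PASS` convention are assumptions for the example.

```python
def reflexion_loop(call_model, task, max_rounds=3):
    """Draft an answer, self-critique it, and revise until the critique passes."""
    answer = call_model(f"Solve: {task}")
    for _ in range(max_rounds):
        # Ask the model to inspect its own answer for errors.
        critique = call_model(
            f"Task: {task}\nAnswer: {answer}\n"
            "List any errors, or reply PASS if correct."
        )
        if critique.strip() == "PASS":
            break  # the model found no faults in its own answer
        # Rewrite the answer using the critique as feedback.
        answer = call_model(
            f"Task: {task}\nAnswer: {answer}\n"
            f"Critique: {critique}\nRewrite the answer fixing these issues."
        )
    return answer


# Toy stand-in model: flags the first draft once, then accepts the revision.
def fake_model(prompt):
    if prompt.startswith("Solve"):
        return "draft"
    if "Critique:" in prompt:
        return "revised"
    # Critique step: pass only once the revised answer appears in the prompt.
    return "PASS" if "revised" in prompt else "off-by-one error"


print(reflexion_loop(fake_model, "toy task"))  # -> revised
```

The toy `fake_model` only demonstrates the control flow; with a real model the critique step is where the "designs tests to critique its own answers" behaviour would live.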

The team ran its technique against a few different performance tests. In the HumanEval test, which consists of 164 Python programming problems the model has never seen, GPT-4 scored a record 67%, but with the Reflexion technique its score jumped to a very impressive 88%.


In the Alfworld test, which challenges an AI’s ability to make decisions and solve multi-step tasks by executing several different allowable actions in a variety of interactive environments, the Reflexion technique boosted GPT-4’s performance from around 73% to a near-perfect 97%, with the model failing on only 4 of 134 tasks.

In another test, called HotPotQA, the language model was given access to Wikipedia and then asked 100 out of a possible 13,000 question/answer pairs that “challenge agents to parse content and reason over several supporting documents.” Here, GPT-4 scored just 34% accuracy, but GPT-4 with Reflexion managed significantly better at 54%.

More and more often, the solution to AI problems appears to be more AI. In some ways, this feels a little like a Generative Adversarial Network (GAN), in which two AIs hone each other’s skills, one trying to generate images, for example, that can’t be distinguished from “real” images, and the other trying to tell the fakes from the real ones. But in this case, GPT-4 is both the writer and the editor, working to improve its own output.


The paper is available on arXiv.

Source: Nano Thoughts via AI Explained
