
ChatGPT’s maker launches a laughably poor tool to detect AI-written text

WHY THIS MATTERS IN BRIEF

Many companies claim they can detect AI-written text, but even the world’s most famous generative AI company is bad at it …

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, connect, watch a keynote, read our codexes, or browse my blog.

OpenAI, the research laboratory behind viral AI program ChatGPT, has released a tool designed to detect whether text has been written by Artificial Intelligence (AI), but warns it’s not completely reliable – yet.

 


 

In a blog post on Tuesday, OpenAI linked to a new classifier tool that has been trained to distinguish between text written by a human and text written by a variety of AI models, not just ChatGPT.

OpenAI researchers said that while it was “impossible to reliably detect all AI-written text,” good classifiers could pick up signs that text was written by AI. The tool could be useful in cases where AI was used for “academic dishonesty” and when AI chatbots were positioned as humans, they said.

 


 

But they admitted the classifier “is not fully reliable” and only correctly identified 26% of AI-written English texts. It also incorrectly labelled human-written texts as probably written by AI tools 9% of the time.
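Taken together, those two rates say a lot about how trustworthy a flag from the tool actually is. Here is a minimal sketch (hypothetical, not OpenAI’s code) of what a 26% detection rate and a 9% false-positive rate imply for a batch of essays, assuming for illustration an even split of AI-written and human-written work:

```python
# Hypothetical illustration, not OpenAI's code: what the published
# 26% detection rate and 9% false-positive rate mean for a batch
# of essays, assuming (for illustration) half are AI-written.
def classifier_outcomes(n_ai, n_human, tpr=0.26, fpr=0.09):
    """Return (AI texts flagged, human texts wrongly flagged, precision)."""
    tp = n_ai * tpr        # AI-written texts correctly flagged
    fp = n_human * fpr     # human-written texts wrongly flagged
    precision = tp / (tp + fp)
    return tp, fp, precision

tp, fp, precision = classifier_outcomes(n_ai=100, n_human=100)
print(f"correctly flagged: {tp:.0f}")   # 26 of 100 AI essays caught
print(f"wrongly flagged: {fp:.0f}")     # 9 innocent writers accused
print(f"precision: {precision:.0%}")    # roughly 74% of flags are right
```

On those assumed numbers, roughly one in four flags points at an innocent human writer, while about three quarters of AI-written essays sail through undetected.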

 


 

“Our classifier’s reliability typically improves as the length of the input text increases,” OpenAI said. “Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems.”

 


 

Since ChatGPT was opened up to public access, it has sparked a wave of concern among educational institutions and research journals across the world that it could lead to cheating in exams or assessments.

Lecturers in the UK are being urged to review the way their courses are assessed, while some universities have banned the technology entirely and returned to pen-and-paper exams to stop students using AI.

 


 

One lecturer at Australia’s Deakin University said around one in five of the assessments she was marking over the Australian summer period had used AI assistance.

A number of science journals have also banned the use of ChatGPT in text for papers.

OpenAI said the classifier tool had several limitations, including its unreliability on text below 1,000 characters, as well as the misidentification of some human-written text as AI-written. The researchers also said it should only be used for English text, as it performs “significantly worse” in other languages, and is unreliable on checking code.
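The 1,000-character floor, at least, is easy to enforce before trusting any verdict. A minimal sketch of such a pre-check (a hypothetical helper, not part of any OpenAI API):

```python
# Hypothetical pre-check, not an OpenAI API: the classifier is stated
# to be unreliable on text below 1,000 characters, so a verdict on
# anything shorter should simply be ignored.
MIN_RELIABLE_CHARS = 1000

def worth_classifying(text: str) -> bool:
    """True only if the text is long enough for the classifier's
    verdict to carry any weight at all."""
    return len(text) >= MIN_RELIABLE_CHARS

print(worth_classifying("Too short to judge."))  # False
```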

“It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text,” OpenAI said.

 


 

OpenAI has now called upon educational institutions to share their experiences with the use of ChatGPT in classrooms. While most have responded to AI with bans, some have embraced the AI wave. The three main universities in South Australia last month updated their policies to say AI like ChatGPT is allowed to be used so long as it is disclosed.
