OpenAI Shuts Down Tool for Detecting AI-Written Text Due to Inaccuracy

OpenAI has shut down its AI classifier, a tool designed to identify AI-generated text, citing its low accuracy. The company made the announcement in an update to an existing blog post, stating that it is incorporating feedback and researching more effective techniques for determining the provenance of text.

Although OpenAI plans to develop mechanisms for detecting AI-generated audio and visual content, it has not provided specific details about these mechanisms.

OpenAI acknowledged that the classifier was never highly effective at identifying AI-generated text and was prone to false positives, in which human-written text was mistakenly labeled as AI-generated. The company initially believed the classifier could improve as it gathered more data, but it ultimately decided to discontinue the tool.

After ChatGPT's rapid rise in popularity, concerns arose about AI-generated text and its impact on education. Educators worried that students would rely on AI to complete their assignments instead of learning the material. In response, some New York schools banned access to ChatGPT on their premises to protect academic integrity.

AI-generated text has also raised misinformation concerns, with studies suggesting that AI-written tweets can be more persuasive than those written by humans. Governments have so far struggled to regulate AI, leaving individual organizations and groups to establish their own rules and protective measures.

Differentiating between AI and human work is becoming increasingly difficult, and even OpenAI, a pioneer of the generative AI field, does not have all the answers. The company has also faced turnover in its trust and safety leadership and is under investigation by the Federal Trade Commission over how it vets information and data.

OpenAI declined to comment beyond its blog post.