OpenAI has introduced an online service that can recognize text authored by a neural network. Ironically, it can also be used to detect output from the ChatGPT bot, which was created by the same developers.
Some schools in the US have already banned the use of ChatGPT for homework, and Stack Overflow has begun blocking users who flood the forum with answers generated by the neural network. In response, OpenAI has developed a new algorithm that estimates the probability that a text was produced by AI, covering not only its own models but also third-party solutions.
At the moment, the virtual "censor" is not very accurate: it correctly identifies generated text in only 26% of cases, says 4PDA. However, OpenAI claims that when used alongside other methods, the classifier can be useful in preventing abuse of text generators. After analysis, the service rates the "artificiality" of a text using the following gradations:
- very unlikely AI-generated (classifier score below 10%),
- unlikely (10-45%),
- unclear (45-90%),
- possibly (90-98%),
- likely AI-generated (above 98%).
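The five gradations above amount to a simple thresholding of the classifier's score. A minimal sketch of that mapping follows; the cutoff values come from the article, while the function name and structure are purely illustrative and not OpenAI's actual implementation:

```python
def grade_ai_likelihood(score: float) -> str:
    """Map a classifier score (probability that the text is AI-generated,
    from 0.0 to 1.0) to one of the five verdicts described above.

    Thresholds are taken from the article; everything else is a sketch.
    """
    if score < 0.10:
        return "very unlikely"
    elif score < 0.45:
        return "unlikely"
    elif score < 0.90:
        return "unclear"
    elif score < 0.98:
        return "possibly"
    else:
        return "likely"


# Example: a text scored at 0.95 falls into the "possibly" band.
print(grade_ai_likelihood(0.30))  # unlikely
print(grade_ai_likelihood(0.95))  # possibly
```

Note that the "unclear" band is deliberately wide (45-90%), which reflects how cautious the classifier is about committing to a verdict at its current 26% accuracy.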
NIX Solutions notes that the OpenAI classifier was trained on texts from 34 systems built by five different organizations, including OpenAI itself. Articles from Wikipedia and similar sites served as the reference for human-written text.