A college student has created an app to help us humans decipher whether text was written by a human or generated by OpenAI’s crazy new chatbot, ChatGPT.
Edward Tian, a computer science and journalism student at Princeton, says he created the program, which he dubs “GPTZero,” to help combat academic plagiarism generated by the new AI-powered chatbot.
ChatGPT, OpenAI’s new large language model bot, has been stunning audiences with its ability to spit out human-like text. Here at Gizmodo, we have used the program to do a number of things, including pen an entire science fiction story and write one of our blogs for us. The tech has impressed a lot of people — but it has also worried them. In particular, critics fear that the chatbot will potentially doom the college essay, lead to a swell in disinformation, and prove otherwise disruptive to major media industries.
Thus, Tian’s program, which analyzes text for “perplexity” (roughly, how predictable or complex the writing is) and “burstiness” (how much that complexity varies from sentence to sentence) to assess whether it was spawned by a human or a machine, seems like a pretty good thing.
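To give a rough sense of the idea (not Tian’s actual code, and GPTZero’s real scoring and thresholds are its own), here is a minimal sketch of perplexity-based detection. It assumes an off-the-shelf GPT-2 model scored through the Hugging Face transformers library, a crude sentence-split stand-in for “burstiness,” and made-up cutoff values chosen purely for illustration.

```python
# Minimal sketch of perplexity-based AI-text detection (illustrative only).
# Assumptions: GPT-2 via Hugging Face transformers; thresholds are invented.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the input as its own labels yields the mean cross-entropy loss;
        # exp(loss) is the perplexity.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def burstiness(text: str) -> float:
    """Spread of sentence-level perplexities, a crude proxy for 'burstiness'."""
    sentences = [s.strip() for s in text.split(".") if len(s.strip()) > 20]
    scores = [perplexity(s) for s in sentences]
    return max(scores) - min(scores) if scores else 0.0

sample = "..."  # several paragraphs give a more reliable readout than one sentence
ppl, burst = perplexity(sample), burstiness(sample)
# Illustrative rule of thumb: low perplexity plus low burstiness tends to
# point toward machine-generated text.
print("likely machine-generated" if ppl < 60 and burst < 40 else "likely human-written")
```

The intuition is simple: a language model finds its own kind of writing unsurprising, so machine-generated text tends to score low and uniform, while human prose is spikier and less predictable.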
The college student shared links to his creation on Twitter this week, explaining how it was designed to “quickly and efficiently detect whether an essay is ChatGPT or human written.”
I spent New Years building GPTZero — an app that can quickly and efficiently detect whether an essay is ChatGPT or human written
— Edward Tian (@edward_the6) January 3, 2023
GPTZero seems to work pretty well. In my initial run with the app, I plugged in some text from a recent conversation with ChatGPT and, within seconds, it accurately deduced that the copy was “machine generated.” Next, I plugged in some writing from a recent blog of mine, and, again, it quickly figured out that it was written by a human. The more text you plug into the program, the better the results seem to be — so it helps if you add at least several paragraphs of copy for an accurate readout.
If you’re curious about how the whole thing works, you can head to Tian’s website to check it out for yourself.
GPTZero is pretty cool — though it just goes to show that, in our dystopian present, not only are machines writing stuff for us, but they’re also telling us whether they wrote it or not. Will humans maintain any cognitive abilities in the future or are robots going to do literally all of our thinking for us? In short: things seem pretty grim for future IQs.