OpenAI's GPT-4 is so lifelike that it can apparently trick more than 50 percent of human test subjects into thinking they're talking to a person. Cognitive science researchers from the University of California San Diego found that, more than half the time, people mistook writing from GPT-4 as having been written by a flesh-and-blood human. In other words, the large language model passes the Turing test with flying colors.
The results, as the San Diego scientists reported in their not-yet-peer-reviewed paper, were telling: 54 percent of the subjects believed they'd been speaking to humans when they'd actually been chatting with OpenAI's creation. The Turing test itself is more of a thought experiment than an actual battery of tests. In Turing's original version, there were three "players": a human interrogator, a witness of indeterminate humanity or machine-ness, and a human observer.
As it turns out, the study's results bore this out across models. Beyond the 54 percent who mistook GPT-4 for a human, exactly 50 percent of the subjects confused GPT-3.5, the latest LLM's direct predecessor, for a person as well. Compared to the 22 percent who thought ELIZA, a rudimentary 1960s chatbot, was the real deal, that's pretty stunning.
Source: Tech Daily Report (techdailyreport.net)