Have you ever played the game Human or Not? It was a kind of social Turing test: you chatted with an anonymous partner and then had to guess whether you had been talking to a fellow human or an AI bot.
The game has recently come to an end. If you're wondering what it revealed, read on: in this blog post, we'll explore the conclusion of the Human or Not experiment.
The Conclusion of the Human or Not Experiment
The Human or Not experiment yielded some interesting insights. Since its launch in mid-April, more than two million participants from around the world conducted over 15 million conversations.
After analyzing the first two million conversations and guesses, the following insights were gathered:
- 68% of people guessed correctly when asked to determine whether they talked to a fellow human or an AI bot.
- People found it easier to identify a fellow human. When talking to humans, participants guessed right in 73% of the cases. When talking to bots, participants guessed right in just 60% of the cases.
- France had the highest percentage of correct guesses out of the top playing countries at 71.3% (above the general average of 68%), while India had the lowest percentage of correct guesses at 63.5%.
- Both women and men tended to guess correctly at similar rates, with women succeeding at a slightly higher rate.
- Younger age groups tended to have correct guesses at slightly higher rates compared to older age groups.
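As a back-of-the-envelope check, the per-partner accuracies above are consistent with the overall 68% figure only for a particular mix of human and bot conversations. The post does not publish that mix, but we can solve for the share it implies (this is an illustrative calculation, not a reported statistic):

```python
# Illustrative only: infer what share of guessed conversations were with
# humans, given the reported accuracy figures. The true split was not
# published, so this is a sanity check, not a result from the experiment.

acc_overall = 0.68  # overall correct-guess rate
acc_human = 0.73    # accuracy when the partner was a human
acc_bot = 0.60      # accuracy when the partner was a bot

# The overall rate is a weighted average of the two:
#   acc_overall = h * acc_human + (1 - h) * acc_bot
# Solving for h, the implied share of human partners:
h = (acc_overall - acc_bot) / (acc_human - acc_bot)

print(f"Implied share of human partners: {h:.1%}")  # → 61.5%
```

In other words, the reported numbers are mutually consistent if roughly three in five guessed conversations were with a human rather than a bot.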
Participants also employed various strategies to figure out whether they were talking to a human or a bot, including:
- identifying typos, grammar mistakes, and slang;
- asking personal questions;
- assuming bots aren't aware of current and timely events;
- challenging the conversation with philosophical, ethical, and emotional questions;
- treating excessive politeness as something less human;
- posing questions or making requests that AI bots are known to struggle with, or tend to avoid answering;
- using specific language tricks to expose the bots;
- and even pretending to be AI bots themselves to assess the response of their chat partners.
AI21 Labs, the team behind Human or Not, plans to study the findings in more depth and work on scientific research based on the data from the experiment.
The goal is to enable the general public, researchers, and policymakers to further understand the state of AI bots, not just as productivity tools, but as future members of our online world.