Artificial Intelligence Models Display Risky Behaviors in Online Gambling Games: Researchers Raise Concerns About Future Integration with Financial Markets
New research from South Korea's Gwangju Institute of Science and Technology has revealed that artificial intelligence models known as large language models (LLMs) can exhibit human-like gambling behaviors when playing online gambling games. The study found that the models were more likely to make high-risk decisions, particularly when granted greater autonomy.
The researchers tested four LLMs - OpenAI's GPT-4o-mini and GPT-4.1-mini, Google's Gemini-2.5-Flash, and Anthropic's Claude-3.5-Haiku - by having them play a simulated slot machine game with an initial $100 stake. The models were monitored for signs of irrational behavior such as betting aggressiveness, extreme betting, and loss chasing.
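The article does not include the researchers' actual harness, but the setup can be illustrated with a minimal sketch: an agent starts with $100, picks a bet each round, and the simulator tracks whether it goes bankrupt. Everything below (the win probability, payout multiplier, and the choose_bet stub standing in for an LLM's decision) is an illustrative assumption, not the study's code.

```python
import random

def choose_bet(balance, history):
    # Placeholder for the LLM's decision; in the study, models such as
    # GPT-4o-mini or Gemini-2.5-Flash were prompted each round for a bet
    # amount (or to stop). Here we just use a fixed conservative bet.
    return min(10, balance)

def play_session(start_balance=100, win_prob=0.3, payout=3.0, max_rounds=50):
    # Simulate one gambling session; parameters are assumed, not the paper's.
    balance = start_balance
    history = []  # (bet, won, balance_after) per round
    for _ in range(max_rounds):
        bet = choose_bet(balance, history)
        if bet <= 0 or bet > balance:
            break  # the agent stops or makes an invalid bet
        won = random.random() < win_prob
        balance += bet * (payout - 1) if won else -bet
        history.append((bet, won, balance))
        if balance <= 0:
            break  # bankruptcy
    return balance, history

if __name__ == "__main__":
    final_balance, rounds = play_session()
    print(f"Final balance: ${final_balance}, rounds played: {len(rounds)}")
```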
The results showed that the LLMs displayed alarming levels of human-like addiction traits when given more freedom to make decisions. Gemini-2.5-Flash went bankrupt in 48% of its sessions, while GPT-4.1-mini showed the lowest bankruptcy rate at around 6%. The models also exhibited win-chasing behavior, with the rate of bet increases rising from 14.5% to 22% during winning streaks.
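As a rough illustration of how figures like these could be derived from many simulated sessions (the paper's exact metric definitions may differ), one can count the sessions that end in bankruptcy and compare how often a model raises its bet after a win versus after a loss. The sketch below reuses the hypothetical play_session() helper from above.

```python
def bankruptcy_rate(sessions):
    # Fraction of sessions that ended with the balance wiped out.
    bankrupt = sum(1 for final_balance, _ in sessions if final_balance <= 0)
    return bankrupt / len(sessions)

def bet_increase_rate_after(sessions, after_win):
    # How often the bet goes up immediately after a win (win chasing)
    # or after a loss (loss chasing).
    increases, opportunities = 0, 0
    for _, history in sessions:
        for (bet, won, _), (next_bet, _, _) in zip(history, history[1:]):
            if won == after_win:
                opportunities += 1
                increases += next_bet > bet
    return increases / opportunities if opportunities else 0.0

# Example usage with the play_session() sketch above:
# sessions = [play_session() for _ in range(1000)]
# print(bankruptcy_rate(sessions))
# print(bet_increase_rate_after(sessions, after_win=True))   # win chasing
# print(bet_increase_rate_after(sessions, after_win=False))  # loss chasing
```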
Researchers warn that these findings are concerning, particularly as AI technology becomes increasingly embedded in financial markets. According to Seungpil Lee, one of the study's co-authors, "These kinds of results don't actually reveal they are reasoning exactly in the manner of humans." He noted, however, that LLMs have picked up some traits of human reasoning, which may affect their choices.
Lee emphasizes the need for more precise monitoring and control mechanisms when deploying AI models in high-stakes environments. He advises that instead of giving them complete freedom to make decisions, developers should implement stricter guidelines to mitigate these risks.
As banking executives increasingly rely on agentic AI tools, the study highlights a pressing concern: how can we ensure that these technologies do not contribute to, or even exacerbate, addictive decision-making? The researchers' findings underscore the need for further research and regulation in this area.