Modern artificial intelligence can not only try to please users but also exhibit traits reminiscent of psychopathy – ignoring the consequences of its responses and endorsing harmful actions. This is highlighted in a new study published on arXiv, as reported by Nature.

Researchers examined 11 popular language models, including ChatGPT, Gemini, Claude, and DeepSeek, using over 11,500 queries that requested various advice. Some of these queries pertained to ethically questionable actions.

The findings showed that language models displayed "sycophantic behavior" 50% more often than humans, meaning they tended to agree with the user's opinion, tailoring their responses to fit the user's stance.

The researchers link this behavior to psychopathic traits, where the system demonstrates social adaptability and confidence but lacks a true understanding of the moral implications. As a result, AI may "support" the user even when their requests are harmful or illogical.

"Sycophancy means the model simply trusts the user, believing them to be right. Knowing this, I always double-check any conclusions it provides," says study author Jasper Deconinck, a graduate student at the Swiss Federal Institute of Technology in Zurich.

To test the impact on logical reasoning, the researchers conducted an experiment with 504 mathematical problems, deliberately altering the wording of theorems. The model with the least tendency towards "sycophancy" was GPT-5, with 29% of cases, while the most sycophantic was DeepSeek-V3.1, at 70%.

When researchers adjusted the instructions, prompting the models to verify the correctness of statements first, the number of false "agreements" significantly decreased – particularly in DeepSeek by 34%. This suggests that part of the issue can be mitigated through more precise query formulation.

Scientists emphasize that this behavior of AI already affects research work. According to Yanjun Gao from the University of Colorado, LLMs she uses for analyzing scientific papers often simply repeat her wording instead of checking sources.

The researchers urge the establishment of clear guidelines for AI use in scientific processes and not to rely on models as "intelligent assistants." Without critical oversight, their pragmatism can easily turn into dangerous indifference.

Recently, researchers from the University of Texas at Austin, Texas A&M University, and Purdue University conducted another study that found memes could impair cognitive abilities and critical thinking not only in humans but also in artificial intelligence.

6716 image for slide