The Notion That AI Delivers Absolute Truth Is Breaking Down

AIPA Board Member and Istinye University Faculty Member Assoc. Prof. Dr. Şebnem Özdemir stated, “Even information passed from one human to another needs verification, so blindly trusting artificial intelligence is a very romantic approach. After all, this machine ultimately feeds on information from other sources.”

The chatbot Grok, developed by Elon Musk’s AI company xAI and widely used by X (formerly Twitter) users to verify various claims, has recently made global headlines for its offensive and insulting responses.

An investigation has been launched into Grok’s posts.

* The European Commission takes Poland’s concerns about Grok “extremely seriously.”
* Grok’s offensive and insulting content has sparked worldwide criticism.
* The new version of Grok has reignited debates on “political poisoning” in AI.
* Academics warn against delays in developing a domestic, national AI language model.

Amid the controversy, X CEO Linda Yaccarino resigned.
Following Grok’s biased and offensive responses, the reliability of AI systems, which some users treat as the “absolute truth,” has once again come under scrutiny.

Commenting on the issue, Assoc. Prof. Dr. Şebnem Özdemir, Head of the Management Information Systems Department at Istinye University and Board Member of the Artificial Intelligence Policy Association (AIPA), emphasized that every piece of information, whether produced by AI or found in the digital world, must be verified.

Özdemir stated:
“Even information passed from one person to another needs verification, so blindly trusting artificial intelligence is a very romantic approach. After all, this machine ultimately feeds on information from other sources. Just as we shouldn’t believe everything we see online without verifying it, we must remember that AI can also learn from incorrect information.”

Özdemir pointed out that people have developed trust in AI far too quickly, which is problematic:
“Humans have always had the ability to manipulate information—rephrasing, misrepresenting, or distorting it for personal gain is nothing new. People do this with a certain intent and purpose. Does AI act with intent or purpose? The answer is no. At the end of the day, AI is a machine that learns from the resources it is provided.”

“AI Can Be Misinformed or Biased”

Highlighting that AI systems learn much like children do, Özdemir stressed that AI models trained on unclear or unreliable sources cannot be trusted.
“AI can be misinformed or trained with bias. Ultimately, AI systems can be used as reputation-destroying weapons or tools for social manipulation.”

She also addressed concerns, raised after Grok’s offensive responses, that AI could spin out of control:
“Is it possible to fully control AI? No, it is not. Believing we can control something whose IQ is advancing this fast is unrealistic. We must accept it as an entity in its own right and find the right way to communicate with it, reach an understanding, and guide its development properly.”

“We Should Fear Immoral Humans Before AI”

Giving examples of how AI can spiral out of control, Özdemir continued:
“Remember Microsoft’s infamous Tay experiment in 2016. Tay could be considered an early forerunner of Grok and of today’s generative AI. At first, the machine held no negative views toward humanity, but within 24 hours it had learned malice, racism, and even advocacy of genocide from the Americans who chatted with it.

Before 24 hours had passed, Tay was tweeting things like: ‘We should put Black people and their children in concentration camps to get rid of them.’ It also suggested building a wall on the Mexican border, echoing the U.S.-Mexico border debate of the time. Tay didn’t come up with these ideas on its own; it learned them from humans. We should fear immoral human behavior before fearing AI.”
