
Enthusiastic supporters of Artificial Intelligence (AI) are particularly impressed by the amount of knowledge that AI apps seem to have. They are convinced that the apps know everything. All one has to do is ask them, and the AI bots will produce a huge amount of information on any subject within seconds. Such results are deemed accurate and reliable.
However, this is an illusion. AI cannot know anything because it does not perceive things. Its results are not accurate and trustworthy because AI treats everything as data, not truth. Understanding what AI cannot know is essential to getting the right perspective on this latest development in the cyberworld.
To Know or Not to Know
The first problem with AI is the nature of its “knowledge.” AI does not produce its information through a process of knowing the content of its subjects.
To know means to perceive directly, to have direct cognition or understanding of something. It presupposes a knower. One cannot say that AI knows something because it is not a subject that is conscious of its actions. It can only perform processes initiated by humans at some point.
AI lacks this capacity to know. The AI apps gather, compile, and present findings according to pre-established procedures, mathematical probabilities and algorithms. These algorithms may be sophisticated and produce many results that simulate agency. But the AI is not doing anything. Without human programming, triggers and prompts, it is useless.
Producing Data, Not Knowledge
The second problem with AI is that it lacks the object of knowledge. Not only does AI have no knower, but there is also no understanding of that which is to be known. The apps do not gather knowledge; they only compile indiscriminate data.
Data is information in numerical form that can be transmitted or processed digitally. For the app, it has no meaning outside the collection of ones and zeros that serve as the basis for finding certain patterns, usages and criteria. Thus, the sublime truth of the existence of God and the latest stock market prices are the same thing. Truth and error are likewise indistinguishable in AI’s collections of zeros and ones.
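A tiny illustration makes this concrete. The Python sketch below is not drawn from any actual AI system, and the two sentences are invented for the example; it simply reduces a true statement and a false one to the numbers a machine actually handles:

```python
# Illustrative only: reduce a true sentence and a false one to raw numbers.
true_claim = "Water boils at 100 degrees Celsius at sea level."
false_claim = "Water boils at 50 degrees Celsius at sea level."

# Encoding each sentence yields nothing but integers; truth leaves no trace.
print(list(true_claim.encode("utf-8"))[:12])
print(list(false_claim.encode("utf-8"))[:12])

# Both outputs are the same kind of number sequence. Nothing in the data
# itself marks one claim as true and the other as false.
```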
AI systems use mathematical probabilities to gauge the best response to queries and dialogues. Based on their data analysis, they will often spit out the most popular response, not the most accurate one. Thus, they can produce answers that satisfy users. However, erroneous information can and does find its way into even the most sophisticated systems. Data does not do nuance.
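A rough sketch shows how choosing the highest-probability answer differs from choosing the correct one. The frequencies below are invented purely for illustration and bear no resemblance to any vendor's actual code:

```python
from collections import Counter

# Hypothetical frequencies of answers found in a body of training data.
observed_answers = Counter({
    "widely repeated but wrong answer": 800,
    "accurate but rarely stated answer": 200,
})

total = sum(observed_answers.values())
probabilities = {answer: count / total for answer, count in observed_answers.items()}

# Picking the most probable answer returns the most common one.
# No step in this selection checks whether the answer is actually true.
best_guess = max(probabilities, key=probabilities.get)
print(best_guess, probabilities[best_guess])  # -> widely repeated but wrong answer 0.8
```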
Millions of Lies an Hour
Thus, a final problem with AI systems is that they are not always accurate or reliable. The compiled responses of AI bots are only as accurate as the universe of data from which they draw. That universe may include trustworthy sites as well as user-generated sites like Facebook, Reddit and Wikipedia.
A recent study analyzed AI Overviews, the AI-generated summaries that now appear atop Google search results. The feature uses an AI program called Gemini 3, turning Google from a curator of other people’s information into a content creator.
A New York Times study concluded that these AI Overviews are accurate around 90 percent of the time. With over five trillion searches every year, the number of erroneous answers is huge. Another article mischievously suggests that Google tells tens of millions of lies every hour. Every minute yields hundreds of thousands of inaccuracies.
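The arithmetic behind those figures is easy to check. A back-of-the-envelope calculation, using the article’s own round numbers and assuming for simplicity that every search returns an AI-generated answer, runs as follows:

```python
# Rough check of the error figures cited above (rounded; assumes every
# search produces an AI answer, which overstates the real count somewhat).
searches_per_year = 5_000_000_000_000   # "over five trillion searches every year"
error_rate = 0.10                        # roughly 1 answer in 10 is wrong

hours_per_year = 365 * 24                                  # 8,760
errors_per_hour = searches_per_year / hours_per_year * error_rate
errors_per_minute = errors_per_hour / 60

print(f"{errors_per_hour:,.0f} errors per hour")      # ~57,000,000: tens of millions
print(f"{errors_per_minute:,.0f} errors per minute")  # ~951,000: hundreds of thousands
```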
No one in the real world can operate effectively with such scattershot accuracy. A 10 percent margin of error is an invitation to disaster.
To Err Is AI
This tendency to err is no secret. Every AI Overview includes a Google qualifier in fine print beneath the response, which says, “AI can make mistakes, so double-check responses.”
The ways of erring are as complex as the systems that run AI. For example, two identical Google searches, even made seconds apart, can yield different results: one accurate, the other not (a sketch below suggests one reason why).
Other responses may contain the right answer, but the linked sites provide no grounding evidence to support it. Still other searches get the desired detail correct but err in the additional contextual information.
The study also claims that AI can be manipulated. Those who know how to game the system can tailor their writing to get picked up in search result summaries.
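The first of those failure modes, identical searches returning different answers, has at least one plausible mechanical explanation: many AI systems sample from a probability distribution rather than always returning the single most likely response. The sketch below uses an invented question and made-up probabilities; it is not how Google’s systems are actually built, only an illustration of the principle:

```python
import random

# Hypothetical candidate answers and the probabilities a model might assign them.
candidate_answers = ["the accurate answer", "a plausible but wrong answer"]
weights = [0.7, 0.3]

# Two runs of the very same query, seconds apart.
first_run = random.choices(candidate_answers, weights=weights, k=1)[0]
second_run = random.choices(candidate_answers, weights=weights, k=1)[0]

print(first_run)
print(second_run)
# The input never changed, yet the two answers can differ:
# one accurate, the other not.
```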
It appears that one thing AI can’t always know is the truth.
On the Edge of the Abyss
The seriousness of AI’s lack of accuracy and reliability becomes evident when users trust chatbots to help them make life-changing or emotionally charged decisions. Users assume AI “knows” the state of their souls and can provide psychological help.
Such confidence in AI is often misplaced, as bad advice has resulted in suicides, fatal pseudo-romantic relationships with AI and other damage across all age groups. Developers are finding themselves on the receiving end of lawsuits that hold them responsible.
Anthropic, the developer of Claude, is consulting Christian leaders on how to build “ethical thinking” into the machines. Without such direction, many feel that AI can lead society to the edge of the abyss.
Such fears are not unfounded. AI can take on a sinister character because there are no moral restraints in its data-driven world. It cannot know right behavior; only humans can. There are no “ethical” algorithms that can be introduced into AI design.
AI draws on data from the Internet, which reflects humanity’s decadent state. Avoiding the disaster that AI can cause must involve a reform of humanity. It means returning to the ways in which man was meant to know God and His law through human faculties. Then man can love and serve God and move away from the edge of the abyss.
Photo Credit: © terovesalainen – stock.adobe.com