To Their Surprise, Scientists Find Limits to Artificial Intelligence

Scientists have found a flaw in the computer processes collectively known as Artificial Intelligence (AI). Remarkably, one of the industry’s giants, Apple, has published the researchers’ work. To say the least, it calls into question the machine-dominated future that many so-called experts see coming.

Knowledge and Intelligence

In many minds, knowledge and intelligence are synonyms. In reality, they are not the same. Knowledge is an accumulation of facts and information. Intelligence is the ability to apply that knowledge. At first glance, that may seem a distinction without a real difference, but the two are not interchangeable. Perhaps a less cerebral way to describe the disparity is the sentence, “He knows a lot of facts, but he lacks common sense.”

Facts are impersonal. They are the same for each individual. For instance, a mile is 5,280 feet, whether the person traveling that mile is a small child, an athlete or a jet pilot. How each of them applies that information, a function of intelligence, is very different. For the child, the distance may appear insurmountable. The athlete sees it as a matter of a few minutes, while for the pilot, it goes by in a matter of seconds.
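
For readers who like to see the arithmetic, the short sketch below works out how long those same 5,280 feet take at a few assumed speeds. The speeds are illustrative guesses introduced only for this example, not figures from any source.

```python
# A back-of-the-envelope sketch of the mile example above.
# The speeds are illustrative assumptions, not figures from the article.

MILE_FEET = 5280  # the fact itself: one mile is 5,280 feet

travelers = {
    "small child walking (2 mph)": 2,
    "athlete running (12 mph)": 12,
    "jet pilot cruising (600 mph)": 600,
}

for who, mph in travelers.items():
    feet_per_second = mph * MILE_FEET / 3600  # convert mph to feet per second
    seconds = MILE_FEET / feet_per_second
    print(f"{who}: about {seconds:,.0f} seconds per mile")
```

Even at 600 miles per hour, the mile still takes roughly six seconds; the exact numbers matter less than how differently the same fact lands on each traveler.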

This scenario is a good way to consider the development of AI. No one doubts that computers can retain massive amounts of information and are adept at helping users gain access to it. That ability has created an industry and remolded whole academic and practical processes. Nonetheless, at least until recently, no one claimed that computers could render that information into intelligence.

What is Artificial Intelligence?

The promoters of AI attempt to convince the public that their products make that leap. In reality, though, those products rely on combining ever-larger amounts of information. They may provide the user with something that appears to be computer-created. However, the “creation” is simply a mashing together and rearranging of myriad pieces of information.

Consider this “definition” of AI provided by Caltech (the California Institute of Technology).

“Artificial intelligence, often called AI, refers to developing computer systems that can perform tasks that usually require human intelligence. It’s like allowing machines to think, learn, and make decisions independently. AI technology enables computers to analyze vast amounts of data, recognize patterns, and solve complex problems without explicit programming. It involves the creation of intelligent machines that can perceive the world around them, understand natural language, and adapt to changing circumstances.”

One video that attempts to explain AI reduces the process to “teaching” machines three skills—how to learn, how to reason and how to correct themselves. The host enthuses that it can create “whole new worlds of music, art and ideas that humans have never even dreamed of.” (Apparently, it still struggles with that pesky rule about not ending a sentence with a preposition.)

Casting Doubts

However, a group of scientists at Apple has recently thrown sand in the eyes of those who see the future through an AI lens. Their paper carries an unfamiliar and uncomfortable title: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.

To understand the issue, it is essential to understand two acronyms common to the AI world, LLM and LRM. LLM stands for large language model. The premise is that a computer can duplicate an individual’s language patterns if the AI program has enough examples upon which it can draw. So, if the program has sufficient knowledge of Saint Thomas Aquinas, I could ask it to produce a paper on any topic as if the Angelic Doctor wrote it. Obviously, that product would be a fraud, but it would read much like a genuine product of the great scholastic’s pen.
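
To make that idea concrete, here is a minimal sketch of pattern imitation: a toy next-word predictor written for this article. Real LLMs use neural networks trained on billions of examples; the toy below (including its sample sentence, a loose paraphrase of a scholastic adage used only as data) shows nothing more than the principle that output is stitched together from observed patterns rather than understood.

```python
import random
from collections import defaultdict

# Toy "training data" standing in for the vast text a real LLM would see.
sample_text = (
    "grace does not destroy nature but perfects nature and "
    "reason does not destroy faith but serves faith"
)

# Record which word has been seen following which.
follows = defaultdict(list)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Write" by repeatedly choosing a word that has followed the previous one.
word = "grace"
output = [word]
for _ in range(12):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # reads like the source text, yet nothing is understood
```

Scale that principle up with vastly more data and far more sophisticated statistics, and the result can read convincingly like Aquinas while understanding nothing.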

The second acronym, LRM, builds on the first. It stands for Large Reasoning Model. Where an LLM merely imitates language patterns, an LRM is meant to work through a problem step by step, producing a chain of “thinking” before it delivers an answer. An analogy from the visual arts may help. This office has a print of Antonello da Messina’s “Saint Jerome in His Study.” A generative program could place Saint Jerome in a setting neither he nor da Messina ever imagined, producing a “painting” in the style of da Messina in which the translator of the Vulgate Bible drives a Ferrari. The LRM attempts a similar leap with arguments rather than images, assembling chains of reasoning it was never explicitly given. LRMs can do far more, but that simple example will be useful for the purpose of explanation.

A Useful Step?

No one doubts that LRMs are a big step above LLMs, a leap that seems remarkable to those unfamiliar with AI. However, the scientists at Apple take issue with those who enthuse that their abilities are limitless.

The Apple paper consists of thirty pages of nearly indecipherable technical prose, but these two relatively simple passages provide a clue to the authors’ reservations.

“We show that state-of-the-art LRMs still fail to develop generalizable problem-solving capabilities, with accuracy ultimately collapsing to zero beyond certain complexities across different environments.”

“We find that there exists a scaling limit in the LRMs’ reasoning effort with respect to problem complexity, evidenced by the counterintuitive decreasing trend in the thinking tokens after a complexity point.”

In other words, there are certain types of problems that the LRMs simply cannot solve; once the complexity passes a certain point, their accuracy collapses. Of course, it remains to be seen whether the programmers can overcome this obstacle. However, it does indicate that there are problems with which the geniuses of the AI world have not yet dealt. How many more such hurdles lie around the next corner remains to be determined.
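
One way to see how a puzzle can outrun a model: the Apple team scaled puzzle difficulty step by step, and puzzles such as the Tower of Hanoi, which the paper examines, grow explosively as they scale. The sketch below, written for this article, only computes how quickly the required solution lengthens; it says nothing about how the models themselves were evaluated.

```python
# Minimum number of moves needed to solve the Tower of Hanoi with n disks.
# Each added disk roughly doubles the length of a correct solution.

def hanoi_moves(n_disks: int) -> int:
    return 2 ** n_disks - 1

for n in (3, 5, 8, 10, 15, 20):
    print(f"{n:2d} disks -> {hanoi_moves(n):,} moves in the shortest solution")
```

A model that handles the short solutions comfortably can still collapse to zero accuracy long before it reaches the longest ones, which is the kind of cliff the passages quoted above describe.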

An Insurmountable Barrier?

Nonetheless, it seems likely that such stumbling blocks will remain, and even multiply, as the problems put to AI grow ever more complicated.

No one doubts that computers can do many tasks that even the most skilled mathematician, scientist or engineer would find daunting. For instance, this author asked his computer to calculate the square root of 300,558. The answer, 548.231703, came back in less than a second. For a non-mathematician, this seems incredible.
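
For the curious, the same calculation takes one line with Python’s standard library; the sketch below simply reproduces the figure quoted above and checks it.

```python
import math

value = 300_558
root = math.sqrt(value)
print(f"square root of {value:,} = {root:.6f}")       # prints 548.231703
print(f"check: squaring it gives {root ** 2:,.2f}")   # recovers roughly 300,558.00
```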

Despite such phenomena, the human brain is still vastly superior to the computer. God, who is omniscient, omnipotent and omnipresent, created it as the crowning achievement of physical creation. In all the earth, the human brain is the only organ with a concept of itself. A human being can use a computer to get a description of a computer, but the machine cannot know that it is a computer. Such a concept of self, and therefore individuality, is only open to humanity. Except for the redemption from sin, it is the greatest gift of the Almighty to any of His creatures. He did not, however, extend to us the ability to transfer it to a machine, no matter how complex that machine may be.
