UPDATE (2/1/2023): The recent craze of ChatGPT, based on the GPT-3 engine, simply further illustrates the truths of this post. The first impression of ChatGPT is “wow”, but the later realization of its tremendous weaknesses makes you want to run away and never trust it again. On many technical matters, ChatGPT, today, is simply wrong, wrong, wrong. At times it describes things in ways completely opposite to how they actually work. But it speaks with such force that an unknowing individual would believe its every word. This is the danger of AI that attempts to act like a human and to project trustworthiness. Of course, this is a problem with humans too: some are simply liars, and others speak error with complete confidence. In the end, ChatGPT is not at all a leap forward in AI; it is a leap forward in Big Data (I probably just ruffled some feathers there). The AI itself still lacks significant capabilities.
As human beings, with human-level intelligence and cognitive ability, we can indeed have things occur to us. That which occurs to us comes into our mind without our necessarily being aware of the stimulus that generated the thought, if indeed an external stimulus always exists in such cases. Some examples of occurrence may help to illuminate my point:
- You’re attempting to think of the name of an actor or actress and give up only to have the name (apparently) randomly occur to you at a later time when you are no longer thinking about it.
- You attempt to solve a problem at work and simply cannot come up with a good solution and then, after having given up, a solution occurs to you hours or days later.
- You are walking along and a word pops into your mind for no apparent reason whatsoever and all you can do is wonder why.
This phenomenon may be explained as long-term priming in many instances; in others, it seems to defy explanation. Long-term priming simply means that you have intentionally or unintentionally provided input to your brain, and it goes about seeking linked information without your awareness. George Mandler and Lia Kvavilashvili have researched this feature of the human brain extensively, and they suggest long-term priming is a common cause of occurrence; however, they also suggest that it does not explain all such cases.
Regardless of the explanation, one thing is certain: we do not know how to tell our brain to fetch information and bring it to the conscious level once retrieved. We do not invoke an internal algorithm (though one may exist of which we are not aware), and the process does not always succeed. At times, being aware of this reality, I will give up on the immediate retrieval of information with the assumption that my subconscious brain will do the work and get it for me. Sadly, it lets me down as often as it satisfies me. And yet, occurrence occurs when I least expect it.
This is human intelligence. Artificial Intelligence (AI) simply has no way of accomplishing this behavior, not in the way it works in humans. Now, a programmer may indeed write code that looks for a particular word related to several different concepts and then returns the results upon completion. Depending on the size of the AI universe (the scope of data considered by the algorithms), that search may take hours, days, weeks, or years to complete, but it is still not the same thing. We do not tell our brain to go do this or that work and then get back to us; it simply does it. Programming such behavior, in the way humans do it, is beyond the realm of reasonableness today.
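To make the contrast concrete, here is a minimal sketch of the kind of explicit background retrieval a programmer might write. The toy corpus and the names here are my own invented assumptions, not any real system: a worker thread follows links outward from a seed word while the main thread continues on, and the results surface only when explicitly collected. Every step had to be commanded; nothing ever “occurs” to the program.

```python
import threading
import queue

# Hypothetical mini-corpus of "linked information" (an assumption for
# illustration; a real system would index vastly more data).
CORPUS = {
    "actor": ["film", "stage", "cast"],
    "film": ["director", "scene"],
    "priming": ["memory", "retrieval"],
}

def background_retrieval(seed, results):
    """Worker: follow links outward from the seed word and report findings.
    Unlike human occurrence, this search runs only because it was invoked."""
    found = set()
    frontier = [seed]
    while frontier:
        word = frontier.pop()
        for linked in CORPUS.get(word, []):
            if linked not in found:
                found.add(linked)
                frontier.append(linked)
    results.put((seed, sorted(found)))

results = queue.Queue()
worker = threading.Thread(target=background_retrieval, args=("actor", results))
worker.start()
# The main "conscious" thread is free to do other work here...
worker.join()
seed, found = results.get()
print(seed, found)  # the answer arrives only because we asked for it
```

Note that even this asynchronous design is nothing like occurrence: the programmer chose the seed, the links, and the moment of collection in advance.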
The concept of episodic retrieval (recovering memories, possibly with feelings and other sensations) has been studied extensively in neuroscience, and we can even determine the primary areas of the brain involved in such retrieval. Some studies, though far less extensive, have explored the wandering of thought (closely related to the occurrence phenomenon of which I speak) and have mapped the areas of the brain engaged at such times. However, given that activity is shown in practically all regions of the brain during wandering, little useful information is revealed. But, alas, herein lies the true dilemma: knowing which part of the brain is active in no way reveals the actual process invoked. We can only theorize about the processes. Therefore, we are left to our creativity in any attempt to accomplish something close to human intelligence within computing.
And here is my bold summary statement: it is my belief that we will never see AI reach the level of intelligence of which humans are capable, not quite, not in my lifetime or yours. We will get close (though it will still take many decades, possibly centuries, at the current pace of progress), but we may never get “there”: to the point where things truly occur to a computer as they do to humans. I allow one exception: if we invent a completely different kind of computing device that is in no way ultimately based on binary logic, we may succeed. Otherwise, let’s just keep reminding ourselves that AI is a collection of one or more algorithms and that these algorithms must be coded by humans (the real source of intelligence).
As a clear disclaimer, I am in no way opposed to AI conceptually. I am opposed to misleading people about what AI has been used to accomplish in the past, what it is used to accomplish in the present, and what it will be used to accomplish in the near future. The colloquial understanding of AI is woefully errant and it is up to those in the sciences and technology sectors to clearly communicate reality. And, by the way, for those companies selling products labeled with AI and machine learning that are doing absolutely nothing different from what they did before, shame on you for capitalizing on the cultural misconceptions of what AI is really all about.
Given that this reality occurred to me only a few minutes ago, I chose to write this quickly before I lost it from mind only to have to wait hours, days, weeks or years before it popped into my mind again.
NOTE: Looking back over this article, I realized that I too have fallen into the trap of referencing AI as if it were a self-existing, self-aware, intelligent entity. I rewrote portions of the article to eradicate that habit, but given that it is a Saturday, I chose not to rewrite it exhaustively. As an example, I had used the phrase “what AI has done in the past, what it is doing in the present, and what it will do in the near future.” Simple fact: AI hasn’t done anything in the past or the present, nor will it in the future. To say that AI has done something is to say that a hammer has built a house. That’s foolishness. The carpenter builds the house. The user gives the command to the computer to perform an act and, therefore, the user does things with AI. AI simply sits in the memory of your computer like a mindless piece of metal until you or some other system gives it a command. Therefore, AI will never get the credit for good deeds nor the blame for evil deeds. Both will always go to the wielder of the tool.