The Paladins

Computers vs Humans: The Limitations and Potential of AI

[Image: a human brain vs a computer chip]

Are computers taking over from humans? Arguably they already are. Tesla, the global car manufacturer, sells cars that drive themselves (in jurisdictions whose laws have been relaxed to permit cars to be driven without a human at the wheel). There are now artificial intelligence programmes that will paint any picture you desire in the style of your favourite artist, and programmes that can write an essay, article or other piece of text far more quickly, and in some cases more cogently and coherently, than a human author; the user merely types the subject matter into a command line.


There are aeroplanes that fly themselves and execute military missions, steering themselves by computational algorithms. Cruise missiles have done this for years (it is called inertial guidance), and they just keep getting more accurate. There are computers that will parse your voice and perform household tasks upon your command. There are CCTV programmes that will highlight potential security breaches automatically, without your having to watch through the whole of the footage yourself. And so on and so forth.


This will only continue. Ever-improving technology will take over more and more of our everyday work until there is a shortage not of labour but of demand for it. John Maynard Keynes predicted this, as have a number of others. Our economies will gradually have to be restructured as labour ceases to have value, because computers can do it all at a fraction of the cost and a substantial multiple of the efficiency. What such an economy will look like, we do not know. Nobody does. It challenges every precept of economic theory to devalue labour to zero. There are a few examples of economies in which nobody, or only a very small proportion of the population, does any work. They tend to be micro-states: Monaco is one, Andorra another, Liechtenstein a third. In these micro-states the work is not done by machines; it is done by immigrants. Developing an economic model based upon the zero value of labour might involve studying such places and imagining, if one can, what they would look like scaled up to the size of whole nations.


We have plenty of time to think about these issues. Technology still has a very long way to go, surely at least a hundred years, before the value of labour approximates to zero. But we are already feeling the effects. Modern Europe, after the consolidations of the COVID years, has plenty of jobs but few well-paying ones. It remains to be seen how governments address that problem. The consequence of technological advancement should surely not be large numbers of people doing lousy work for lousy pay. If the problem is to be resolved in the interests of humankind, our economy should instead become one in which people work ever less for ever higher pay. In other words, technological advancement ought to increase quality of life, not decrease it. Technology should serve us, not debase us.


This leads us into the question of whether computers can ever 'take over', for want of a better phrase. Will they end up making all the decisions because they are cleverer than us and hence they acquire some sort of sentience? Is our future to be the slaves of the machines?


Many science fiction narratives have been premised upon this assumption, not least The Terminator series of movies that foresaw computers becoming cleverer than humans, governing them, and then seeking to eliminate them as a potential threat or a waste of space and energy. Could that happen? You may say, 'of course not; that was just imaginative fiction'. But why not?


The news we have for you, and we do not say whether it is good or bad, is that this will never happen, no matter how far technology advances. The reason is that humans have what this author calls a capacity for rational imagination, which computers never could have. Moreover this is not some empirical or scientific hypothesis. It is obvious, and true a priori. It derives from the very nature of the logical thinking upon which technological development is premised. It follows from the way we as humans think about the world, which cannot obviously be changed. It comes from the very idea of what it is to be human. These comments may appear cryptic; but analytical philosophers know exactly what they refer to, and the logical groundwork for the conclusion that computers cannot out-think humans has been in place for centuries. Let us explain.


The philosopher Immanuel Kant, arguably the greatest philosopher who ever lived and wrote (he lived from 1724 to 1804, long before anyone had conceived of computers), started the line of thinking, never seriously challenged since, that the relationship between experience and our knowledge is more complicated than it might first appear. It is for this insight above all that he earned that reputation. While experience is the foundation of all knowledge, he observed, it does not come to us unpolluted. Rather we apply our own internal lenses to the experiences we have; and these are like a pair of coloured spectacles that it is impossible to take off. Hence the objective world is unknowable to us; only the world as refracted through the prisms of our rationality, with all its overt and concealed assumptions, can be known. We can never separate our experiences from ourselves. In this sense, all experience and knowledge are subjective to the human race's collective capacity to ratiocinate. This is what makes humans distinctive, above the animals.


The conceptual and experiential framework (the coloured spectacles) that people bring to all experiences, and without which experiences are not possible, was called by Kant, for reasons best long forgotten in the history of philosophy, "synthetic a priori knowledge". It is imperative not to get bogged down in the etymology of this phrase, which creates a gaggle of onward philosophical problems that are a nightmare to resolve and are debated in circular ways even today. (For example, if the opposite of "synthetic" is "analytic", then what sort of knowledge is "analytic knowledge"? It turns out that everyone can agree that mathematics is analytic knowledge, apart from Kant himself, who said mathematical knowledge was synthetic; but nobody can quite explain why.)


What Kant was talking about, behind these obscure phrases, was the features that all experiences have in common just by virtue of being experiences; it is impossible to conceive of having any experience without them. The three principal, and mostly indisputable, such features are that experiences must take place within spatial dimensions (an experience of seeing an apple must be of that apple over there; it makes no sense to talk of seeing an apple with no geospatial location); within time (you saw the apple yesterday; it makes no sense to say that you saw the apple with no express or implied temporal reference); and within a web of causation (it makes no sense to talk of an apple that is not part of a causal process involving trees, growing apples, watching them fall off the branches, picking them up, and so on).


These three qualities of all experiences, Kant observed, are not something that merely happens to run uniformly through an objective reality of external objects. The reason is that it is actually impossible to conceive of having any experience which lacks these qualities. It makes no sense to say that you saw an apple but that it had no imaginable geospatial or temporal qualities. It is a nonsense. Hence these qualities must be brought to experience by the act of experiencing itself. In other words, they are imposed by the mind that does the experiencing; and that is why you cannot take them away. They are the coloured spectacles you cannot take off. What really is 'out there', objects lacking the qualities that human cognition necessarily brings to all experience, nobody can actually say, precisely because nobody can take the coloured spectacles off. Kant called these unknowable objects, unfiltered by the mind, 'noumena', again for reasons best forgotten in the annals of the history of philosophy.


What does all this have to do with artificial intelligence? It is important in two ways, and both ideas were influential when philosophers first started writing about the possibility of artificial intelligence as early as the 1930s (well before the modern computer had been invented). Firstly, it follows from the above that we cannot create a machine that applies 'synthetic a priori knowledge' to the external world, because we know nothing about that process. We cannot take the coloured spectacles off. All we can do is create machines that treat the principles of 'synthetic a priori knowledge' as axioms of the system we are creating; whereas the truth is that there is more to it than that, and that something more is something only humans having experiences can do. So any machine attempting to mimic the way humans have experiences will have a fundamental lacuna: the process of transforming objects (noumena) into experiences is not one that can be set out in a computer programme or any other machine operation.


The second point, and this one is obscured by the fact that generations of subsequent philosophers have misunderstood Kant on it, is that another of the essential principles that humans bring to experience (that is, another piece of synthetic a priori knowledge) is that all experiences are subject to mathematical regularity. The laws of mathematics invariably apply to all experiences. It is not possible to put two apples together with two more apples and get five apples. The number of apples you get is four.


Kant obscured the issue, and had two centuries of philosophers bury his insights in mindless philological critiques, by describing mathematics as synthetic a priori knowledge. Of course mathematics is not synthetic knowledge, the complainers bleated; it is the principal example of analytic knowledge, being a series of propositions that follow deductively from the Peano axioms (the nine axioms formulated by the nineteenth-century Italian mathematician Giuseppe Peano, from which all the arithmetic of the natural numbers follows). The Peano axioms turn out to be rather trite. To give an example, one axiom states that 0 is a number. Another says that for any number x, x = x. It is hard to get very excited about any of the Peano axioms if you are a normal person; but philosophers are very seldom normal people.
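For readers curious about what is actually on the list, here, purely by way of illustration, is a standard modern rendering of the core axioms in logical notation (Peano's own 1889 formulation differed in detail and also included housekeeping axioms about equality, such as x = x):

\begin{align*}
&\text{(P1)} && 0 \in \mathbb{N} \\
&\text{(P2)} && \forall x \in \mathbb{N},\; S(x) \in \mathbb{N} \\
&\text{(P3)} && \forall x \in \mathbb{N},\; S(x) \neq 0 \\
&\text{(P4)} && \forall x, y \in \mathbb{N},\; S(x) = S(y) \rightarrow x = y \\
&\text{(P5)} && \big(\varphi(0) \wedge \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))\big) \rightarrow \forall x\,\varphi(x)
\end{align*}

Here S(x) means "the successor of x", that is, the next number after x, and \varphi stands for any property of numbers; (P5) is the principle of induction. As we say, none of this is terribly exciting on its own.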


Had he lived long enough to learn of Giuseppe Peano's ludicrous set of axioms, Kant would surely have said: "That is all very well; but where did these axioms come from? What experiences do you need to have in order to learn the principle that for any number x, x = x?" The answer is that you do not need any experiences to know that this totally silly statement is universally true. That is because mathematical logic and structure are, like time, space and causation, features of experience brought to experience by the mind itself. It is inconceivable to imagine an experience in which (let us say that x = 5) I have five apples but also some other number of apples that is not five. Such an idea is incomprehensibly silly, much like the Peano axioms themselves. The logic of mathematics is something humans bring to experience as part of its essential structure. It too is part of the coloured spectacles we cannot take off.


Now we can move forward to the early twentieth century, and to something even sillier that philosophers devised as they came to terms with the idea that it might be possible to build machines that could undertake computational tasks. Such a computer would be based upon the principles of mathematics, they reasoned; and that was and is correct. Two philosophers, Bertrand Russell and Alfred North Whitehead, decided that it would be a good idea to prove, using formal logic, that all the truths of mathematics could be derived from a single set of axioms (they worked in a system of symbolic logic called the 'ramified theory of types', another phrase whose etymology it is entirely futile and confusing to attempt to divine). Russell and Whitehead set about proving this rather uninteresting hypothesis (that every mathematical truth can be derived from an appropriate set of axioms) in what must go down as one of the most pointless works in the history of philosophy, Principia Mathematica (published between 1910 and 1913), which ran to three volumes and thousands of pages of obscure notation and symbols no longer in use. The book was pointless not only because it is not clear that anybody other than its authors actually read it, but also because what it was trying to do, namely to derive every truth in mathematics from a series of axioms expressed as strings of symbols, was quite silly and indeed impossible.
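To give a flavour of the kind of grinding formal derivation the project involved (our own simplified sketch, using a Peano-style recursive definition of addition rather than Russell and Whitehead's actual notation), even the most banal arithmetical truth has to be earned step by step; famously, the theorem corresponding to 1 + 1 = 2 arrives only after several hundred pages of Principia Mathematica:

\begin{align*}
&\text{Definitions:} && 1 = S(0), \quad 2 = S(S(0)), \quad x + 0 = x, \quad x + S(y) = S(x + y) \\
&\text{Derivation:} && 1 + 1 \;=\; S(0) + S(0) \;=\; S\big(S(0) + 0\big) \;=\; S\big(S(0)\big) \;=\; 2
\end{align*}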


The fact that this is impossible is very important. If what Russell and Whitehead set out to do cannot actually be done, then any system based upon mathematical axioms cannot be 'complete'. That means there are mathematical truths that cannot be derived through principles of logical deduction, which is all that modern computers, at bottom giant number-crunching mechanisms, ever do. Because computers work by logical deduction from mathematical axioms, there are things computers cannot know, because their truth cannot be ascertained by any process of logical deduction. In other words, there will always be things that humans can know that machines never can.


It was not a hard task to puncture a hole in Russell and Whitehead's monstrosity. It took an obscure Austrian logician called Kurt Gödel to write a short paper disproving Russell and Whitehead's entire thesis. The paper bore a typically obscurantist title, On Formally Undecidable Propositions of Principia Mathematica and Related Systems. While this is arguably the most influential work in the entire history of philosophy, and although it is short, on no account should you try to read it. Gödel was mad, and his formal proof is expressed in dense symbolism that only a madman could formulate.


Instead, rely upon us for a summary. What Gödel proved is that any consistent system of mathematics or logic of any reasonable level of sophistication (specifically, any system rich enough to express ordinary arithmetic) is incomplete: there are statements expressible within it that are true but cannot be derived from its axioms. Now try to hang onto the mind-wrenching ideas here, because understanding what Gödel said, and why it is true, is key to understanding why machines will never be able to out-think humans. Gödel constructed, for any such system, a statement that cannot be derived from the system's axioms, namely a statement which says, in effect, 'this statement cannot be proven from the axioms of this system'. If the system could prove it, the system would be proving something false, and so would be unsound; if the system cannot prove it, then what it says is the case, and so it is true yet unprovable. A human standing outside the system can see that it is true; the system itself can never derive it. A related consequence is that no such system can prove its own consistency. Nor is there any escape in simply adding the troublesome statement as a further axiom: the enlarged system generates a new statement of the same kind, and so on forever. The reason computers cannot escape this is that they are bound by the set of axioms they are given, and are incapable of stepping outside them. That is why they are called axioms: they represent the entirety of the building blocks of the machine's knowledge. So even though a person can see the truth of what the machine cannot prove, the machine cannot, because its entire universe is defined by the axioms it has been given.
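For those who like to see the shape of the construction compressed into symbols, it can be sketched as follows (our paraphrase in modern notation, not Gödel's own):

\[
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\big(\ulcorner G \urcorner\big)
\]

That is, the sentence G is provably equivalent, within the formal system F, to the claim that G has no proof in F. If F is consistent, it cannot prove G; yet we, standing outside F, can see that G is therefore true. Gödel's second theorem adds that F cannot prove \mathrm{Con}(F), the statement of its own consistency; and the enlarged system F + G simply has a new unprovable sentence G' of its own.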


Another way of expressing this admittedly complex point of logic is that humans are capable of leaps of the logical imagination that computers never could replicate. A computer is always bound by some system (no matter how complicated, sophisticated and self-learning it may be), and by virtue of the deductive way any computer works, it is incapable of conceiving of a set of axioms broader than those it is given. It can only deduce things from the axioms it has; it can never rewrite its own axioms, because those are the basis of everything it does. So computers get stuck when a task lies beyond their axioms, and you have to build a bigger, better computer with more axioms. Humans, by contrast, do not need their heads to be rewired with every advance in knowledge. They are limitless in their capacity to think of things in a new way. That is why Kant said that mathematics is 'synthetic a priori knowledge'. Whatever your understanding of mathematics, there is always a more sophisticated model with more axioms; the human mind is capable of understanding and adapting to it, whereas a computer, whose starting and ending point is a given set of axioms from which everything it does is derived, is not.


While Gödel's work as a logician was horribly obscure and obtuse, it turned out that he was a far more skilled prose writer. In an essay with the admittedly inauspicious title What is Cantor's Continuum Problem?, Gödel spelt out in words the problem most relevant to artificial intelligence. There is, it turns out, not just one infinity. Imagine trying to count the points on a straight line. You cannot: Cantor showed that however you try to pair the points off against the counting numbers 1, 2, 3 and so on, there will always be points left over. The infinity of points on the line is strictly bigger than the infinity of counting numbers; and the same trick can be repeated to generate ever bigger infinities, without end. Cantor's Continuum Problem is the question of whether any infinity sits strictly between those first two, the infinity of the counting numbers and the infinity of the points on the line; although it is not really so much a problem as a feature of the concept of infinity. There is not just one infinity. There is an infinity of infinities. And an infinity of those infinities. And on it goes.
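Put in the standard modern symbols (a textbook rendering, not Gödel's own wording), Cantor's discovery and the Continuum Problem look like this:

\begin{align*}
&\text{Cantor's theorem:} && |S| < |\mathcal{P}(S)| \ \text{ for every set } S \\
&\text{Hence an endless ladder:} && \aleph_0 < 2^{\aleph_0} < 2^{2^{\aleph_0}} < \cdots \\
&\text{Continuum Hypothesis:} && \text{there is no cardinal } \kappa \text{ with } \aleph_0 < \kappa < 2^{\aleph_0} = |\mathbb{R}|
\end{align*}

Fittingly for the theme of this essay, Gödel himself showed that the Continuum Hypothesis cannot be disproved from the standard axioms of set theory, and Paul Cohen later showed that it cannot be proved from them either: a concrete example of a question that outruns any fixed set of axioms.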


The modern version of Cantor's Continuum Problem might be put like this: while humans can get their heads around this endless hierarchy, a computer never can, because at some stage its axioms bind it to a fixed stock of infinities. Hence humans are capable of relentless accumulation of knowledge, whereas with a computer you will eventually always have to build a new one. That is why computers will never be able to out-think humans, and why we will never end up being run by Skynet.


The conclusion Immanuel Kant drew from all this was, famously, the necessity of the divine. For humans to possess this limitless, unbounded capacity for reasoning entailed the existence of an omniscient force. When we apply a parallel sort of reasoning to moral obligations, it turns out that this omniscience must also be perfectly moral, in a way only humans can understand and computers never could, because there are endless infinities of moral problems just as there are endless infinities of points on a straight line. Therefore computers can never emulate moral decision-making of the kind undertaken by people. The source of this limitless moral law is what we call God.


We conclude with the paradoxical position that while many scientists and philosophers like to pretend that scientific and technological progress does away with the need for the idea of God, in fact they are absolutely wrong and Kant was quite right. The more developed our knowledge of science and technology becomes, the more we need the idea of God to make sense of it all.


As another of history's most influential philosophers, Ludwig Wittgenstein, might have put it, philosophy is the elimination of silly ideas.


We wish all our readers a very Happy New Year.

