The Paladins

What's wrong with modern artificial intelligence?



The phrase “artificial intelligence” is extremely popular at present, with all manner of people, from newspaper journalists to technology company billionaires and academic gurus, offering opinions about it. We are also told that artificial intelligence is going to revolutionise the way that humans live, and even that it is already doing so. But are these things really true?


We need to begin, as always, by defining our terms. The British cryptologist Alan Turing attempted an early definition that has stood the test of time, even as very different sorts of computer and machine learning algorithm have emerged under the umbrella term of artificial intelligence. Turing characterised artificial intelligence as activity on the part of a computer indistinguishable from that of a human. In his historically famous Turing test, he proposed placing a human on one side of a wall and a computer on the other, and asking whether the human, exchanging typed messages with whatever was on the far side, could tell that it was in fact a computer rather than another human. A classic example might be an online game of chess: to what extent can a human competitor tell whether he or she is playing another human or a computer?


Turing undertook his early theoretical work in the field of artificial intelligence in the 1950s. Since then there have been a number of blind alleys, and it was not until the early 2010s that optimism about the possibilities of artificial intelligence algorithms, and investment in the field, increased substantially. A decade later the field is booming, and it is scarcely possible to leaf through a daily newspaper without finding some reference to artificial intelligence or a discussion of a new application of it.


The various sorts of artificial intelligence algorithm that enable machines to emulate complex human activities are all based around the idea that human intelligence is exhibited in the capacity to learn from new experiences. There are a variety of machine learning algorithm types, in many cases developed in parallel and without significant reference to one another; but their outcomes are often much the same. One gives the machine access to a broad database, which increasingly is the entirety of the world wide web.


The programmer of artificial intelligence software gives the machine the capacity to search the database in response to a particular sort of problem put to it, and then allows the machine to absorb feedback about its response, so that the next time it is asked to undertake a task it prioritises the database searches it was told were successful. It may also learn from human responses to its outputs, for example where a machine is having a dialogue with a human. In this way, so it is imagined, machines learn in the same way as humans do; and if they learn enough, and have large enough memories and sufficiently high processing power, then in time they will learn as much as a typical human knows, or even more, and their performance will be indistinguishable from that of humans, or they may even outperform humans in the exercise of cognitive functions. At least this is the theory. And to an extent it has proven remarkably fruitful, particularly recently. Machines are now able to write (but not yet speak) natural languages, as the artificial intelligence software ChatGPT has demonstrated. You can ask ChatGPT any question, or ask it to write an essay or article, and it will compose an answer in flawless, fairly formal English, drawing on the vast quantities of text gathered from the world wide web on which it was trained.
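To make the feedback loop described above a little more concrete, the following is a minimal sketch in Python of a retriever that up-weights sources whose answers received positive feedback. It is purely illustrative: the class, scoring rule and weighting factors are invented for this article and bear no relation to how ChatGPT or any other commercial system is actually built.

```python
from collections import defaultdict

# Hypothetical sketch of feedback-weighted retrieval. The class, scoring rule
# and weighting factors are invented for illustration only.

class FeedbackRetriever:
    def __init__(self, documents):
        self.documents = documents                 # list of (doc_id, text) pairs
        self.weights = defaultdict(lambda: 1.0)    # learned preference per source

    def answer(self, query):
        # Score each document by crude keyword overlap with the query,
        # scaled by the weight that past feedback has given its source.
        query_words = set(query.lower().split())
        def score(doc):
            doc_id, text = doc
            overlap = len(query_words & set(text.lower().split()))
            return overlap * self.weights[doc_id]
        return max(self.documents, key=score)      # the source used for the reply

    def feedback(self, doc_id, helpful):
        # Reinforce sources whose answers the user judged helpful; penalise the rest.
        self.weights[doc_id] *= 1.2 if helpful else 0.8


retriever = FeedbackRetriever([
    ("a", "the turing test compares a machine with a human interlocutor"),
    ("b", "chess engines search game trees rather than imitating people"),
])
doc_id, text = retriever.answer("what is the turing test")
retriever.feedback(doc_id, helpful=True)           # source "a" is now preferred
```

The point of the sketch is only that "learning" here amounts to adjusting numerical weights in response to approval, not to anything resembling understanding.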


However, a number of issues remain.


It seems we have not yet been able to write artificial intelligence algorithms that can accurately copy the use of natural language slang or informal terms. Of course vulgar or rude terms can be used by artificial intelligence algorithms; you just supply them with a dictionary that contains the relevant rude words. (In practice most commercially available algorithms have "negative dictionaries" containing these words, and they are prohibited from producing sentences that contain them. Other programmers have written profanity-laced bots that reply to you only with swear words.) The challenge with slang, however, is that it is by definition the use of natural language otherwise than in accordance with the rules of that language. It therefore involves an essential element of creativity: breaking the rules of a language in a way that still renders the language meaningful, for example by drawing analogies between different spheres of human activity (say, between toilet functions and activities for which one has a general distaste). This sort of creativity is something that machines do not yet seem able to replicate.
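For what it is worth, a "negative dictionary" of the kind just mentioned can be sketched in a few lines. The word list and function names below are invented for illustration; real moderation filters are considerably more elaborate.

```python
import re

# Hypothetical sketch of a "negative dictionary" filter. The word list and
# function names are invented; real moderation systems are far more elaborate.

NEGATIVE_DICTIONARY = {"blast", "dang"}   # stand-ins for genuinely offensive terms

def violates_policy(sentence: str) -> bool:
    # Tokenise crudely on word characters and check each token against the blocklist.
    tokens = re.findall(r"[a-z']+", sentence.lower())
    return any(token in NEGATIVE_DICTIONARY for token in tokens)

def filter_reply(candidate: str, fallback: str = "I would rather phrase that differently.") -> str:
    # Suppress any candidate reply that contains a blocked term.
    return fallback if violates_policy(candidate) else candidate

print(filter_reply("well, blast it all"))    # replaced by the fallback
print(filter_reply("good morning to you"))   # passes through unchanged
```

Note that a filter of this kind only matches strings; it has no grasp of why a rude word is rude, which is precisely the contrast with slang drawn above.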


The concepts of truth and knowledge are difficult ones to understand, and philosophers have been arguing for centuries about their meaning without conclusively resolving those disputes. Because we cannot define these terms, we cannot write code that captures these concepts in algorithmic form. Yet the capacity to tell the truth is a quintessential quality of human thinking, as is the ability to distinguish truth from falsehood. If we cannot even define these terms, then we have little hope of programming them into machine learning algorithms. At best we will be able only approximately to simulate the methods by which humans go about learning things; we will never be able to write exhaustive code about such subjects, because we cannot define our terms.


Most philosophers in history have agreed that there are things called propositions: statements that are either true or false. We can only know things that are true, so most philosophers agree; we cannot know things that are false. However some have argued that the concept of knowledge is so complex that even this apparent truism does not hold water: that knowledge, and hence truth, is relative to a society or a set of cultural or social values. Theories of this sort are sometimes known as the coherency theory of truth (in particular amongst continental European philosophers, who draw their inspiration from Hegel and Marx) or the pragmatist theory of truth (in particular amongst American philosophers). These schools of thought about the concept of truth coalesced in the second half of the twentieth century in a sociological and linguistic discipline called "critical theory", in which a number of conventional assumptions about sociology and philosophical knowledge were criticised as being culturally relative.


For the advocates of coherency or pragmatist theories of truth, a statement is true if it is consistent with other statements. The result is that truth, and even knowledge, are relative to a society's beliefs, and what is true can change over time. When everybody believed the world was flat, it was true that the world was flat. This is a profoundly anti-scientific, anti-Enlightenment way of thinking about truth and knowledge, because on the same reasoning it may be true that the world is dominated by a hierarchy of gods and devils, or that the Mappa Mundi is an accurate cartographic instrument, or any manner of other nonsense that prevails in the society of the day. Enlightenment thinking in the eighteenth century tried to cast off such primitive consensual ways of thinking about truth and knowledge and to replace them with the concept of a single objective and universal truth, which we learn about (and hence acquire knowledge of) through the application of what came to be known as the scientific method.


Nevertheless the coherency or pragmatist theory of truth turns out to be a convenient theory for those advocating the acquisition of knowledge by machines in a way that rivals humans, because the way artificial intelligence algorithms "learn" things is essentially by scouring massive databases (also known as the world wide web, or parts of it) for assertions and statements they find repeated, thereby excluding outliers: assertions that appear less frequently. These knowledge algorithms are therefore essentially replicating a coherency theory of truth. If you ask ChatGPT to write you an essay with the title "What is Kant's transcendental deduction? Does it work?", you will not receive from the underlying artificial intelligence algorithm a textual analysis of the writings of Immanuel Kant together with a logical analysis of the argumentation; instead you will be given a piece of text that reflects the platitudes about Kant's work most commonly found upon a trawl of the world wide web. Admittedly these two questions are amongst the most difficult in the entire history of philosophy, if not all human knowledge, and huge amounts have been written about them with no consensus emerging; but the example illustrates a point about the difference between the way humans (ought to) think and the way computers go about generating natural language text. An artificial intelligence algorithm will breezily generate text unhelpfully containing platitudes about Kant's transcendental deduction and what some people may or may not have thought about it; but it will not conduct analysis and it will not provide the reader with a persuasive conclusion. That is something a human being well acquainted with this admittedly very difficult subject may be capable of doing; a computer probably never will.
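The consensus heuristic described above can be caricatured in a few lines of Python. The corpus and threshold below are invented for illustration; no real system ranks text quite this crudely, but the principle of privileging whatever is most often repeated and discarding outliers is the same.

```python
from collections import Counter

# Caricature of the consensus heuristic: treat the assertion repeated most often
# in the corpus as "the answer" and discard outliers. The corpus and threshold
# are invented for illustration.

corpus = [
    "kant's transcendental deduction argues that the categories must apply to any possible experience",
    "kant's transcendental deduction argues that the categories must apply to any possible experience",
    "kant's transcendental deduction is widely regarded as obscure",
    "the deduction fails because it presupposes the very objectivity it sets out to prove",  # the outlier
]

def consensus_answer(assertions, min_support=2):
    counts = Counter(assertions)
    # Keep only assertions repeated at least min_support times; rarer claims,
    # however well argued, are simply dropped.
    survivors = [claim for claim, n in counts.most_common() if n >= min_support]
    return survivors[0] if survivors else None

print(consensus_answer(corpus))   # prints the most-repeated platitude
```

The dissenting line in the toy corpus, whatever its merits, never survives the filter; that is the coherency theory of truth expressed as code.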


From this insight we might observe that artificial intelligence is contributing to a renaissance in the coherency / pragmatist theory of truth and knowledge, in which things are considered true because they are consistent with other things being said, and knowledge is treated as that which it is convenient to know. This approach to truth and falsehood promotes a herd mentality, in which the point of view advanced by the greatest number of people prevails, or the opinion one hears repeated most often. Social media, and the world wide web in general, promote such a perspective by reason of the sheer volume of opinions, assertions and "factoids" (factual assertions that might appear intuitively obvious to some group or other, but for which no coherent body of evidence is ever advanced and about which no sensible debate may be encountered) that the internet enables us all to publish. In such an environment, dissenters may be condemned or persecuted, even if they have valid arguments that will never be heard amidst the volume and mêlée of opinionated debaters in electronic environments.


There once was a time in which, to publish an opinion, you needed to find a publisher, either of a book or of an article, who would invest financially in your publication because they thought that others would pay to read it. Increasingly in the contemporary world there is no such filter, because the costs of publication have been reduced to virtually zero and anyone can publish anything they want effectively without charge - including this article, the marginal costs of publishing and of reading which are both zero. Although this has introduced a certain democracy to academic thinking and the exploration of ideas, it has also created an environment in which it is so easy to express one's views that many people do. The more views that are expressed, the more society verges towards a coherency or pragmatist theory of truth and tacitly abandons the precepts of the Enlightenment era, in which rational and scientific methods of analysis aimed at the objective truth are given primacy. Instead the herd mentality prevails, and the inclination is increasingly to assume that the truth is whichever opinion prevails the most. Hence many websites (and social media is particularly bad for this) now publish the number of viewers of specific articles: as though the opinions of Donald Trump are particularly valuable because more than two million people have read them.


These are the consequences of the coherency theory of truth; and artificial intelligence algorithms, which purport to answer all manner of questions by reference to the materials they find on the world wide web, fortify this way of thinking in the psyche of people who use the internet as their principal source of knowledge about the external world: that is to say, all of us. At the same time we see a deterioration in the quality of journalism, which (because publication and readership of newspapers, magazines and the like have become virtually free) has become reliant upon advertising for revenue. The number of people who read each article on a website can be counted, so journalists are assessed not by the quality of the articles they write but by the number of people who read them, because this is presumed (probably correctly) to be correlated with the number of people who may click on the advertising links associated with those articles. It also has the consequence that published articles of all kinds are getting shorter, because people now read a great deal of their news and other media on their mobile telephones or laptop computers and give their reading materials an ever shorter attention span. Contemporary professional journalists know that any article of more than around 700 to 1,000 words is unlikely to be read by a typical member of the public. Hence abbreviation has become the norm. People rarely write comparatively long articles (such as this one) anymore, because length decreases the volume of people likely to read them; and in the modern era, where the coherency theory of truth operates, volume of readership is everything.


Artificial intelligence is likely to increase the volume of text appearing on the world wide web exponentially over the course of the next few years, because writing text about anything has now become so easy: one simply sets the parameters for the text and the computer writes it within a few seconds. Given the likely explosion in the volume of computer-written text spread across the internet about every conceivable subject, the coherency or pragmatist theory of truth will become ever more attractive in the public eye, simply because there is so much more internet-searchable material about any particular subject.


If one undertakes a Google search and the first ten results all agree with one another about a particular subject - something artificial intelligence is likely to promote, because by their very nature artificial intelligence algorithms process and regurgitate material they find on the internet about which there is substantial consensus - one will be inclined to think that this represents the truth purely by force of numbers. Artificial intelligence has not captured the concept of dissent, because it relies upon other sources for its materials; because artificial intelligence algorithms do not reason in the way human beings ought to (that is to say, by critical examination of assertions rather than by weighing materials according to the volume of writings that support a particular position); and because they tend to work in generic platitudes. If you ask ChatGPT for an opinion on a contentious subject (e.g. “what do you think of the history and/or cultural attributes of nation X”), it either ducks the question entirely or provides a list, which it attempts to render balanced, of positive and negative qualities, without engaging in analysis or providing a reasoned conclusion. Because computers cannot (yet) capture the way that human beings reason in response to facts and evidence, artificial intelligence is contributing to a cultural homogenisation of knowledge.


It is of course possible to write artificial intelligence algorithms that lie or defame people: one simply creates a database of false and defamatory assertions and ascribes them randomly (or not so randomly) to individuals in response to a user request. However this is mindless, and what artificial intelligence algorithms cannot yet do is distinguish between justified or grounded defamation of people and unjustified defamation. That is because they cannot reason about or analyse defamatory assertions and assess whether there are genuine grounds for them, or whether they merely sound instinctively correct. Instead they can only work by reference to sources, saying e.g. “some people think that …” or “it has been reported that …” because they have found those sources on the internet. They cannot analyse whether the sources might be correct or otherwise, or weigh them up. A lot of internet publications have problems in this regard, and it is an ever greater problem in contemporary journalism, in which the habit now is to report an allegation against a person and (sometimes) give them an opportunity to respond, but more rarely to engage in analytical comment and criticism (there are some rare exceptions). As ever more news and comment comes to be written by artificial intelligence algorithms, this trend is surely likely to continue.


The propagation of artificial intelligence is a profoundly anti-Enlightenment phenomenon. It involves moving away from critical thinking and analysis of assertions, ideas, worldviews and assumptions using the scientific method and deductive and inductive inference, towards assessing the value of a series of assertions by the frequency with which they are repeated, and by explicitly or implicitly referencing sources that may not be reliable without assessing their veracity in any systematic way. It is leading us towards a herd mentality in thinking about truth, increasingly prevalent in all societies in consequence of the propagation of news and information across the world wide web. It is unarguably a bad thing.


Finally, we might observe that natural language text written by artificial intelligence algorithms, by reason of its generic qualities, stating views and assertions without analysing or criticising them, becomes fairly easy for the experienced reader to spot. Consider this article. It is full of assertions, some of them controversial, argued for in a mixture of sophisticated and more vernacular language. Some of the conclusions it posits are controversial, and are stated without hesitation or qualification. This is the way real human beings write; artificial intelligence algorithms do not. They cannot go out on a limb, because that is a uniquely human way of thinking: assessing the value and merits of arguments for oneself and not just following the opinions of the greater majority, wherever one may find them.


The Austrian logician Kurt Gödel, to whom we often refer in this series of articles, had a lot to say about the concept of artificial intelligence and whether machines could think and act like humans. He was extremely sceptical of the idea, because he observed that whereas machines are bounded by the axioms within which they work, human thinking does not suffer from this limitation. That is why humans can engage in inductive inference of a speculative nature that enables them to reach conclusions outside their initial data set, whereas all the machine learning algorithms in the world start from an initial set of axioms governing how they will operate, outside of which they cannot step. Human creativity, novel reasoning, critical analysis and the capacity to think and act independently and impartially are qualities of humans that computers, confined as they are within closed logical systems, will never be able to replicate. That is why it will always be possible for a skilled and attentive human to spot a string of sentences written by a machine, and why the Turing test will never be met.
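For readers who want the formal claim on which this argument leans, the standard modern statement of Gödel's first incompleteness theorem runs roughly as follows (this is the textbook formulation, not a quotation from Gödel's own remarks on minds and machines):

```latex
% Gödel's first incompleteness theorem (standard modern formulation, with Rosser's refinement)
\textbf{Theorem.} Let $F$ be a consistent, effectively axiomatised formal system
capable of expressing elementary arithmetic. Then there is a sentence $G_F$ in
the language of $F$ such that
\[
  F \nvdash G_F \qquad \text{and} \qquad F \nvdash \neg G_F .
\]
```

The theorem says only that any fixed, consistent set of axioms rich enough for arithmetic leaves some questions undecided; it is this boundedness of formal systems to which the paragraph above appeals.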

