What hath God wrought?
Samuel Morse, inventor of the telegraph
First telegraph message, sent 24 May 1844
+++++
Let us begin by defining a perfectly transparent environment. What we are talking about is a theoretical artifice in which all communications are perfectly transparent. This means that:
every person has the ability simultaneously to listen to, amend, delete or manufacture the communications that they are making and receiving, and that every other person is making and receiving (we will call this "content transparency"); and
every person has the capacity to imitate any author of any communication (we will call this "identity transparency").
A perfectly transparent environment is obviously something highly dystopian; the problem is that in contemporary electronic communications it is precisely what we are heading towards. In order to understand what has happened, we need to begin with a very short history of language and communication, and then to trace and understand the direction in which we have gone since then.
Before Alexander Graham Bell and the advent of the modern telephone, there were two ways in which human beings could communicate using language, and they had created specialised methods for doing these things. One was to talk in person. You would meet a person, speak to them and, in most cases, see them. Absent masks, make-up and disguises (techniques that have for the most part disappeared in recent times, for reasons we will discuss below), the problem of identity transparency did not exist in face-to-face meetings (and these days the problem is ever less pertinent for such meetings, precisely because the foregoing skills are less prevalent). Where you meet a person face to face, the problems of identity transparency are of a whole order of magnitude less severe than either in the modern world or in the other pre-telephone method of communication: the written letter, or other communication using writing or symbols.
Letters were originally handwritten, and this remained so even beyond the advent of the modern typewriter, the first commercially successful example of which was patented in 1868. (Typewriters remained too clumsy and confusing for common non-professional usage more or less throughout their natural lives, and it was not until the advent and common usage of email in the early 1990s and beyond that handwritten letters died out.) The problem of identity transparency was ameliorated to a degree by the fact that handwriting had to be forged; and a profession of handwriting forgers duly emerged, in particular when the ink signature became prevalent as a tool of identity confirmation in connection with banking transactions. (One would sign a cheque to confirm the authenticity of the account owner's instructions to their bank to wire a sum of money, a practice that remains substantially in place to this day in the United States of America but not in many other places.) So professionals emerged who could forge signatures, and this resulted in the criminalisation of forgery, and so on and so forth. The profession of handwriting or signature forgery has since diminished, just as has that of devising masks and disguises, again for reasons we will discuss below.
Mechanical or electronic written communications increased the possibility of identity forgery as those methods developed in the nineteenth century. The problem with the telegraph, a system of communicating letters down an electric wire by translating them into Morse code (a series of dots and dashes that could be tapped out as pulses in an electrical wire by way of a machine), was that you had no idea who exactly was at the sending (or receiving) end of the machine. There were telegraph companies, typically associated with railroad companies (as the telegraph wires often followed the railroads across the continental United States, where much of this technology was pioneered), who would be assumed to have reliable employees; but as was a common theme in some American "Western" movies that purported to describe life in frontier territories of the western United States in the 1800s, telegraph machines might be expropriated by bandits or other undesirables to send out false messages. So problems of identity in electronic communications stretched back at least that far.
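The letter-to-pulse translation at the heart of the telegraph can be sketched in a few lines of Python. The table below is a deliberately partial, illustrative fragment of the International Morse alphabet, not a complete implementation:

```python
# A minimal sketch of Morse encoding, as used by the telegraph:
# each letter maps to a sequence of dots (short pulses) and
# dashes (long pulses) tapped out on the wire.
MORSE = {
    "A": ".-", "D": "-..", "E": ".", "G": "--.", "H": "....",
    "M": "--", "N": "-.", "O": "---", "R": ".-.", "S": "...",
    "T": "-", "U": "..-", "W": ".--",  # partial table, for illustration
}

def to_morse(text: str) -> str:
    """Translate text into Morse, separating letters with spaces."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(to_morse("WHAT"))  # .-- .... .- -
```

Note that nothing in the encoding identifies the operator: any hand on the key produces identical pulses, which is precisely the identity problem described above.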
The telephone created a new set of issues. In one sense the telephone reduced problems of identity fraud or mistake, and this remains so to the present day. That is because if one knows the voice of one's telephonic interlocutor, one can recognise whether a person is who they say they are. The problem arises with the telephone where a person whose voice one has never heard before places a call and says they are someone, and the question arises as to whether they really are the person they say they are. Contemporary telephone banking fraud-prevention procedures are focused on trying to avoid identity frauds of this kind, as vulnerable people (or even people not paying obsessive attention to identity security problems, which for the greater majority of the population is not a crime or even a sin) may be scammed into handing over sensitive personal or financial details by a fraudster pretending to be a staff member in a trusted bank or other typically reputable institution.
The problems relating to transparent environments really arose with email. The first email was sent in 1971 in the context of early computer engineering, although email did not really enter common public currency as a means of communication until the early 1990s. The concept underlying email remains much the same today as it was when it was invented. Each user has an email address, of the form "xxxxx@xxxx.xxxx" (a format that has remained unchanged throughout over fifty years of usage), used to identify themselves and the intended recipient(s) of the communication. The communication consists of a series of ASCII characters, although as time has gone on electronic documents have been able to be attached to emails, as well as graphics inserted and various other elaborations upon the concept. The entire communication is translated into a standardised format, including "metadata" (which describes the electronic location of the sender, the intended recipient, the contents of the email and attachments, and other pieces of pertinent information). This formatted data is transmitted into a globally interconnected set of computers (for the most part connected in the modern era by fibre-optic cables) and makes its way, one hopes, to the intended recipient, the metadata providing sufficient indication to the various computers through which it passes of which way it ought to go.
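The structure described above, metadata headers followed by content, can be illustrated with Python's standard library email module. The addresses here are of course invented:

```python
# A sketch of an email's standardised format using Python's standard
# library. The headers are the "metadata" described above (sender,
# recipient, subject); the body is the content. Addresses are invented.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"    # hypothetical sender
msg["To"] = "bob@example.com"        # hypothetical recipient
msg["Subject"] = "A short note"
msg.set_content("The body of the email, as plain text.")

# as_string() shows roughly what is actually transmitted:
# header lines first, then a blank line, then the content.
print(msg.as_string())
```

The crucial point for what follows is that both the headers and the body are just text in a known format: any machine that handles the message in transit can read them, and, absent further protections, rewrite them.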
The problem with this system, which sounds so extremely simple (like all good ideas), is that it can of course be intercepted. Just as a piece of mail can be intercepted by a postman with malign intentions, an email can be intercepted by a computer coder with malign intentions who happens to control one of the computers through which the formatted electronic message passes. That malicious coder might change the metadata describing the intended recipient or sender of the email, so that the email goes to the wrong place or appears to have come from someone it did not really come from; or, potentially even worse (and this is where problems of content transparency become particularly acute in the modern electronic context), change the content of the communication so that the recipient of the email imagines that the sender (if that itself has not been the subject of fraud, which often it is simultaneously - forging the sender's identity is known as "spoofing") has sent some communication with content that does not represent what the sender really intended to say.
The rise of encryption technology
One of the first innovations that arose in consequence of the inherent fragility of email as a form of electronic communication was the use of the designated "server", that is to say a specific computer or set of connected computers to which all email with specific metadata would be directed. The "server" system would be the beneficiary of a series of IT security measures, such as physical and electronic robustness of the computer architecture in question, and would be responsible for cross-checking the content of the email data using checksums and other mathematical methods, and would then send out the email data in the right direction once everything had been checked as legitimate. The receiving email software, whether incorporated in a World Wide Web browser or in specific software installed upon a computer, would repeat a checksum analysis of the incoming email data to ratify its authenticity, and the recipient could have reasonable confidence in the content of an email (s)he received and the identity of the purported sender.
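The checksum idea can be sketched as follows. This is a simplified illustration using a SHA-256 digest as a stand-in for whatever specific method a given server actually used, and the message content is invented:

```python
# A sketch of the checksum idea: a digest of the message is computed
# at the sending end; the receiving software recomputes it and
# compares. Any change to the content changes the digest.
import hashlib

def digest(message: bytes) -> str:
    """Return a hex SHA-256 digest of the message bytes."""
    return hashlib.sha256(message).hexdigest()

original = b"Please pay the invoice to account 12345."
checksum = digest(original)  # sent or stored alongside the message

tampered = b"Please pay the invoice to account 99999."
print(digest(original) == checksum)   # True: content is intact
print(digest(tampered) == checksum)   # False: tampering is detected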
One of the first email servers popular with the general public was called Hotmail, established in 1996, although a number of academic, government and commercial institutions established their own servers prior to then. Each server, at this stage, was literally a series of physical computers installed in a university or office premises. These were the days before everything had gone virtual.
Before we come to the problem of what was wrong with the server approach, let us mention the logical conclusion of this approach to maintaining the integrity of emails, which turned out to be encryption so strong that no computer could feasibly break it. An early and influential example was "Pretty Good Privacy" (PGP), first released in 1991 and subsequently enhanced; modern systems of this kind typically use 256-bit keys, whence the phrase "256-bit encryption" commonly bandied around in popular culture to persuade the general public that their electronic communications are secure. Although it is true that properly implemented 256-bit encryption is for all practical purposes indecipherable, that turned out precisely to be its downfall. It was so effective a method of encryption that it could be used by criminals to convey messages the purpose of which was to conspire to commit crimes, and law enforcement authorities found themselves frustrated by this. It was insufferable to contemporary law enforcement authorities that all of a sudden contemporary technology had permitted criminals, terrorists and worse to devise means of communicating with one another to perpetrate their misdeeds, and there was nothing that could be done by lawful government authorities to intercept and prevent this. Contemporary electronic technology had suddenly become a monster, enhancing the ease with which very serious crimes could be committed by some of society's most sinister vagabonds.
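To see why 256-bit encryption is described as indecipherable in practice, a back-of-envelope calculation suffices. The guessing rates below are deliberately generous assumptions, far beyond any real hardware:

```python
# Back-of-envelope arithmetic for why a 256-bit key cannot be
# brute-forced: even granting a (wildly optimistic) trillion guesses
# per second per machine, across a billion machines, exhausting the
# keyspace takes incomparably longer than the age of the universe
# (roughly 1.4e10 years).
keys = 2 ** 256                          # possible 256-bit keys
guesses_per_second = 10**12 * 10**9      # 10^21 guesses/second overall
seconds = keys / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")              # on the order of 10^48 years
```

This is why attacks in practice target implementation flaws, key handling and human error rather than the mathematics itself, a theme that recurs below.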
What governments therefore did, to overcome the use of privacy and server technologies to commit crimes secretly, was to (more or less privately) invite email service providers and email software publishers to provide what became known as "back doors" into their software and messaging services. This meant that governmental authorities could, if they suspected the use of electronic communications for the commission of crimes or other activities against the public good (for example, espionage), surveil those electronic communications to achieve law enforcement objectives.
In principle this was all perfectly laudable, as long as it was done with legitimate goals in mind and within a legal framework that balanced the needs of law enforcement against appropriate standards of civil liberties, so that governments would not just be reading people's communications out of sheer nosiness or with some sinister objective in mind - as when the governments of totalitarian countries engage in the surveillance of lawful communications about legitimate political activities in order to suppress those activities and/or legitimate politicians. Unfortunately in 2013 a US government contractor called Edward Snowden decided to leak the details of one such scheme to the world's media, whereupon he was indicted for violations of the Espionage Act and fled to Russia. The problem with Snowden's actions was not just that they were against the law. What he had done was to explain to the world's criminals, dictators, terrorists and other malfeasors the structure of a legitimate surveillance architecture designed to detect and prevent serious crime. In other words he gave a free pass to the world's bad guys. This is presumably the reason why he was never pardoned. What he did, although he may have imagined he was doing it in the pursuit of some higher ideals related to civil liberties, was really unforgivable.
The consequence of Snowden's revelations was that all sorts of people learned how to use legitimate government backdoors into electronic security systems for illegitimate purposes - including of course the Russian government, which is presumably why they granted him asylum. They were the big winners from all of this, because they learned how to break into western electronic security systems and, as by now has become all too clear, it is not a good thing for the Russian government to know how to do this. Nevertheless this is one of those cats that, once let out of the bag, cannot be put back in. The net result is that the software coding necessary to intervene in ostensibly secure electronic communications systems, including email, is now commonly available, and not just to governments (whether those friendly to the West or otherwise) but to all sorts of other people with the financial resources or political connections to acquire the necessary expertise.
The rise of the Smartphone
Mobile telephone technology had been around since 1973, although for a long time mobile telephones were large, bulky devices with shaky battery lives, known for a substantial period as "carphones" because they were sufficiently large that you could only credibly use them if you had them installed in your car. They started to become popular as handheld devices carried in the pocket in the 1990s, and they were limited to wireless telephony via a series of mobile telephone masts. Early mobile telephone networks competed as to the density of masts in different regions of a country, something unthinkable today as mobile phone masts have dramatically reduced in size and have become ubiquitous.
Early mobile telephones also supported simple unencrypted text messages, called SMSs, originally intended for network companies to send service messages, although the mobile telephone companies soon realised there was a potentially enormous market in permitting customers to send one another messages as an alternative to calling each other. (This led to the development of what we now call "instant messaging", which again is ubiquitous.) SMSs, because they were unencrypted, suffered from all the security fragilities of early emails: it was (and remains) possible for anyone to send an SMS to any mobile telephone with any content appearing to come from any other mobile phone number, and indeed there are plentiful websites, pieces of software, mobile phone "Apps" and the like that permit one to do precisely that.
From around 2001 the first so-called "3G" networks arose, which were mobile telephony networks with sufficient bandwidth to permit the carriage of data so that mobile telephone users with the appropriate software installed could send emails, photographs, encrypted messages, graphical images (what we now call Emojis) and all sorts of other electronic data across mobile telephone networks. Mobile telephone technology also developed so that mobile telephones small enough to fit in one's pocket could carry multiple complex pieces of software (what we now call Apps) and indeed mobile telephones became as sophisticated as modern laptop and desktop computers, with their own operating systems, sophisticated central processing units, memory stores, camera lenses and ever more.
All this however introduced new frailties into the architecture of contemporary electronic security, because mobile telephones became so sophisticated that no single programmer or team of programmers could say any more that they understood how all the different pieces of software on a modern mobile phone worked or how they interacted with one another. This gave hackers renewed opportunities, which in reality had always existed with imperfectly written computer software, to find ways to exploit mistakes in the coding of software to infiltrate the operation of a modern mobile telephone and to get the device to do things its user does not want it to do, such as turn itself off or on at will, take photos or video without the user knowing, record audio, transmit that data elsewhere, pass confidential personal information (such as a person's address book) to third parties, and so on and so forth.
Moreover the possibilities for hacking multiply when there are many different complex pieces of software installed on a powerful computational device, each of which contains different coding lacunae, and where nobody has really given much thought as to how these different pieces of software, with all their various flaws and mistakes, interact with one another.
Once hackers had learned how to do all these things - and some of those hackers were employed by governments for legitimate purposes; some were employed by criminals for illegitimate purposes; and some undertook their hacking activities to expose risks and frailties for the public benefit ("white hat hackers") - it was inevitably only a matter of time before these various hacking techniques became public and at least reasonably common knowledge. Anyone who imagines that it is not fairly straightforward for an experienced IT expert to penetrate a modern mobile telephone is naive.
The Apple anti-trust litigation
One particularly successful mobile telephone manufacturer called Apple tried to resolve this problem by limiting the pieces of software that could be downloaded onto its devices to those on prescribed lists. That way, Apple could check the integrity of the software installed on its devices. In consideration of doing this, Apple charged and charges an administrative fee to the approved software vendors, which is generally a proportion of the fee they charge to consumers. All purchases or other downloads of software onto an Apple smartphone must go through an Apple-written piece of software called the "App Store".
There were two problems with this idea, both of which appear to be insurmountable, and hence the concept should probably be abandoned. Firstly, it missed the boat. Hackers had already found out how to get into iPhones (Apple's brand of smartphone), and those techniques are now widely known and can be bought for a fee. Apple smartphones are generally regarded within the electronic security community as somewhat more secure than other smartphones (the great majority of which use a common operating system called Android, which is full of security flaws too), but the difference is marginal: Apple's smartphone operating system, iOS, has a series of well-known security flaws, and anyway the vetting procedures for new Apps are imperfect (as is any vetting process) and rely more upon vendor market reputation than actual code analysis.
The second problem is a legal one: tying the purchaser of a hardware product to specific software products associated with the vendor of the hardware violates a principle of anti-trust / competition law against anti-competitive agreements, because such ties create de facto monopolies for the tied subsidiary commodity (in this case the software) and hence monopoly pricing, which is inefficient. The point is this: if there are A, B, C and D instant messaging services available on the market and Apple lets you buy only A, B and C, then D is excluded and the prices of the protected commodities (here A, B and C) will be higher because there is less competition. Also innovation is stifled: A, B and C know they have the market captured, and hence there is no incentive for new entrants E, F and G to come along with superior products - including products superior in security features. So the whole exercise may turn out to be counter-productive if the goal is that which Apple purports it to be, namely protecting the security integrity of their mobile hardware.
Put in slightly more layman's terms, the point is that if I buy an Apple telephone then who is Apple or anyone else to tell me what software to install upon it? That is up to me. If I am concerned about IT security then I am free to go away and research for myself which pieces of software are more secure and which are less so. Apple has no business preventing me from undertaking this market exercise, and even less business charging a fee (which will eventually be passed on to the ultimate consumer) for preventing me from exercising my economic and consumers' rights. Nor is there any a priori reason to believe that Apple is any better an assessor of security integrity in general than are the general public. The methods Apple uses are secretive, as is often the case with corporate methods. Consumers may form their own conclusions about the relative security qualities of products they are purchasing and installing on their phones, and this is presumed to be the way that transparent markets work. The general presumption of free market economics is that the best assessor of the quality of a product is the consumer who will be using it, not a monolithic corporation with a dominant market position up the chain of supply, nor indeed government. That is why we have anti-trust laws: to break up cartels or other arrangements that impede free consumer choice.
Perfectly transparent environments
We started this essay by defining a perfectly transparent environment as one in which there can be no security as to content of a communication (a communication might be intercepted and amended in the context of communication from one person to another) and no security as to the identity of the sender nor indeed the recipient (a sender cannot be certain that his or her communication will reach the recipient and the recipient cannot be certain that a message is from the purported sender). The underlying thesis of this essay has been that increasing electronic advancements have rendered communications ever more transparent, and we have sought to illustrate that the advent of email and electronic instant messages are part of the trend towards increasing transparency of communications.
The problem with electronic communications - and this goes back as far as the telegraph - is that because they all ultimately boil down to electronic pulses down wires, just as did telegraphs, they can be intercepted. The identity of the individual sending the message may be assumed, or the electrical pulses may purport to identify the person in question. Or the identity of the individual receiving the message may be assumed but that person may never receive the message; someone else may do. Or the electrical pulses may be intercepted and their contents changed. The net result is ever-increasing uncertainty as to the veracity of communications, and technology has not so far managed to provide a solution to the problems of content and identity interference.
Encryption technology has not managed to find a solution to these problems, both because there are always ways around encryption - and it is impossible in practice to keep these methods to a privileged group (once governments know what they are, eventually everyone else will too) and because modern computer software has grown so complicated that nobody really knows how all its components interact anymore and hence it is increasingly full of security flaws.
All this suggests that in seeking to communicate reliably and confidentially we may in fact wish to consider technological reversal: going back to older methods of communication with which people have lost the requisite skills to interfere, as technological skills have superseded them. Few people know how to forge handwriting or signatures any more. It is a lost art. Likewise, relatively few people know how to steam open a seal on an envelope, whereas this used to be commonplace. Because these skills have to an extent been lost, methods of communication such as handwritten notes and letters put in the mail are more secure than they used to be, and very arguably much more secure than modern emails and instant messages that could be garbled as to the identity of the parties or as to their contents at any instant. In the fog of exponentially ever more complicated contemporary technology, the full extent of which ever fewer of us really fully understand (if any of us do), we may actually want to consider abandoning a lot of it.
This author invites the reader to spend a few days without a mobile telephone. It is an eminently possible thing to do. It takes a couple of days to get used to it, and then you start looking at the real world again and talking to people in person and creating relationships with individuals more closely and with greater attention, rather than dealing with them just by sending them electronic messages. While electronic messages appear prima facie convenient, they end up being bewildering and replete with the possibility for deceit. The other thing we all ought to get used to more is actually having telephone calls, where we can hear one another’s voices. Although with the advent of chatbots artificial intelligence algorithms can increasingly effectively imitate human writing, and voice-cloning technology is advancing, convincingly imitating a specific person’s voice in live conversation remains considerably harder. You can reliably identify someone you know by speaking to them on the telephone and hearing the distinctive intonations in their voice and the specific words that person is habitually likely to use. Telephones are not as good as communications in person; but communications in person are not always possible. Telephone calls can be tapped; but again that is a skill that has to some extent died, and in many countries now an analogue telephone call may be more secure than an internet-based one (what is known as a “VoIP call”).
Technological regression is one method we might use to defeat those who would attempt to intervene in our communications with a view to defrauding us or compromising the confidentiality of communications we wish to have with one-another. A perfectly transparent communications environment is truly a dystopia, in which we all go insane because we do not know what we are saying to each other or who we are saying it to. Humans thrive upon communications and the use of language to convey their ideas to one-another. We cannot subsist as humans at all unless we are able to communicate effectively. Technological advancement has reached a point that it is now restricting our ability to communicate, and hence it is contributing to a dystopian future. This is the essence of what was once known as the post-structuralist hypothesis: human society is advancing, but towards some fundamentally incoherent or corrupted goal. We can only prevent this, as the human race, if we take a grip on the problem ourselves; governments cannot do it for us. In many ways technology has unleashed a series of monsters and one of the challenges for this and for future generations is to engage in a constant exercise in self-restraint in preventing technology from making our lives worse rather than better.
Electronic communications contribute towards a future dystopia in which machines govern humans rather than the other way round. We have the power to prevent this. Let’s start talking to each other more, rather than to those machines we all carry around in our pockets.