Imagine the following purely theoretical hypothesis. (We are not actually advocating that anyone do any of the things described below; this is a thought experiment to get us thinking about government regulation of social media.)
I set up a Twitter account in the name of 'Joe Bloggs'. My Twitter feed consists of a series of posts inviting people to buy heroin from me, e.g. 'Heroin for sale; write with delivery address and quantity of pure heroin desired in grams; we will revert with price and wire instructions. 7-14 day delivery window.'
What happens next? We really don't know. Will Twitter spot the word 'heroin' and refer the matter to a moderator, who will make a personal determination as to whether the post breaches Twitter's policy on 'illegal or certain regulated goods and services' (a copy of which appears here)? In that case, can't we just evade the auto-filter with the letter 'H' or the word 'Brown'? (Here is a stupid list of slang terms for heroin; there are an awful lot of them.)
Are we expecting Twitter algorithms to single out for human review all draft Tweets with these various slang terms, many of which are common and inoffensive terms in the English language? This sounds impossible.
In fact it appears that Twitter never, or very seldom, undertakes pre-publication review of Tweets. Here is the important phrase in their policy: "In addition to reports received, we proactively surface activity that may violate this policy for human review." Now we have no idea what the verb 'to surface' means, except in the context of submarines and perhaps other objects submerged in water - which Tweets are not. Nor is there a rational analogue. In other words, Twitter's policy is what is technically known as bullshit: nonsense intended to deceive.
What the policy really means is that Twitter will look at individual complaints about Tweets; from time to time it may run a word scanner across a proportion of Tweets for particularly egregious or offensive words (e.g. 'fucking', 'rape', etc.) and single out some Tweets for individual review; but that is about it.
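To make concrete why a word scanner of this kind is so easy to evade, here is a minimal sketch in Python. The banned-word list and the example messages are our own illustrative assumptions, not Twitter's actual filter.

```python
# Minimal sketch of a naive keyword scanner: flag a message for human
# review if it contains a word on a banned list. The list here is an
# illustrative assumption, not any platform's real filter.
BANNED = {"heroin", "rape"}

def flag_for_review(message: str) -> bool:
    """Return True if the message contains a banned word (case-insensitive)."""
    words = message.lower().split()
    return any(w.strip(".,;:!?'\"") in BANNED for w in words)

print(flag_for_review("Heroin for sale; write with delivery address"))  # True
print(flag_for_review("Brown for sale; write with delivery address"))   # False: slang evades the list
```

The second call shows the problem described above: substitute any of the dozens of slang terms for heroin and the scanner sees nothing, while adding all those slang terms to the list would flag vast quantities of innocent English.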
The problem is that this is simply not enough. We have already told the reader how to set up a heroin sale usergroup or network without attracting the attention of the Twitter moderators. This should not be possible; but it is.
In response to one cocky interlocutor, who commented 'have you ever seen an advertisement for heroin on Twitter?', the answer is: of course not. That is because this author does not take heroin and does not go looking for such things. But Twitter is full of all sorts of hidden secrets cloaked in euphemistic language. That is one reason why members of intelligence services like it (along with LinkedIn messaging): it provides an unlimited opportunity to communicate anonymously using codes. If the heroin dealers of the world have not yet cottoned onto this curious feature of Twitter, we would be very surprised.
Other social media applications, such as WhatsApp and Viber, permit the sending of messages to huge numbers of people in encrypted form. Certainly no human being in the social media company is reviewing any of these: they are end-to-end encrypted (or so it is advertised) and hence the software managers cannot read them.
Imagine a publisher of regular (paper) books who is asked to publish a book on how to assassinate your wife (or husband). It might be called 'Violent assassinations against your spouse: a practical guide'. This would be an abominable social outrage, and the publisher ought to bear (severe) criminal and legal culpability, arguably including a substantial period of custody. It can be no defence that the publisher never read the book before publishing it and acted only upon a complaint from a consumer.
So it must be with electronic media. Social media platforms that publish outrageous and illegal material should bear legal liability just as any other publisher does. Only this way do they have a legal incentive to do any more than 'surfacing'. That is our argument for government regulation of social media.
There is another issue: the identity of the ultimate author. Virtually all social media platforms permit people to post materials anonymously. You may need to provide a valid mobile phone number, but these can be purchased anonymously for EUR 5 at corner kiosks in 95 per cent of the world's countries.
It is no good at all if people can anonymously post things on social media that incite people to commit serious crimes. It is not just a matter of removing the material and sanctioning the social media software company. The most culpable perpetrator is the author, and there should be a means of going after him or her using legal tools. By reason of social media's anonymity, this is often not possible at the current time.
Social media companies do not keep proper records of who their customers are - what banks and law firms call 'Know Your Client' (KYC) documents. These typically consist of a passport copy and a copy of an official document confirming address. Such documents are held under circumstances of strictest confidentiality (they cannot be released to anyone without a court order), and IT security measures are undertaken to ensure the security of KYC files. But it does mean that if the customer (whether of a bank or a law firm) uses that relationship to commit crimes, it is ultimately possible to find out who the person committing the crimes is.
Social media has none of this. There are no KYC files in the social media company's records from which the ultimate perpetrator can be identified with certainty. You can guess, using things like IP addresses where they are collected and kept; but IP addresses can be faked or disguised.
Hence social media companies should be regulated so that they keep adequate KYC files on their customers, paying or otherwise, both to deter their use for crime and to ensure that where they are used for crimes, the ultimate perpetrators can be better identified.
The principle of requiring a mobile phone number is already a concession to KYC (and the clever amongst us know the social media platforms that do not require even that - they are the spies' favourites, naturally). But mobile phone numbers are bad KYC, because anonymity is so easy to achieve with them. Even more absurdly, you can set up a mobile phone that relates to a variety of different mobile phone numbers without any of those numbers having SIMs in the phone, or even operational SIMs at all. Proper document-based KYC, introduced gradually, is the next logical step.
There remain problems. What if we trace lots of social media accounts proffering illegal content to mysterious addresses in Russia? Well, if that does happen (and surely it would if the system were working properly), we start by banning those specific addresses and identities from using social media (a government regulator can draw up lists); ultimately we take the sanction of banning the entire country from social media because of the dangerous menace that country is perpetrating.
It would be a step-by-step process - not an indiscriminate sanction but a road to go down in order to incentivise countries such as Russia themselves properly to regulate social media content produced in their jurisdictions.
Consider RT, the well-known English-language Russian state broadcaster. We do not ban it, even though the views expressed on it may be disagreeable. But the managers and proprietors of RT know they would be banned if they started disseminating materials inciting crimes. So that is where the line is drawn. Likewise it can be drawn with social media.
We will also need to deal with the issue of proxies (e.g. Panamanian villagers who don't speak English but have very active Twitter feeds full of pro-Russian propaganda). In principle proxies should be banned unless their origins are revealed pursuant to a specific set of disclosure protocols. It is easy to detect proxies, by the way: modern computing algorithms can do that.
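One crude signal such algorithms can use is repetition: proxy or propaganda accounts tend to post streams of near-identical messages. Here is an illustrative sketch of that single heuristic, using our own invented thresholds and example posts; real proxy detection combines many more signals than this.

```python
# Illustrative heuristic only: flag accounts whose feed is mostly
# near-duplicate messages, one crude signal a proxy/propaganda account
# might give off. Thresholds and examples are assumptions for this sketch.
from difflib import SequenceMatcher

def near_duplicate_ratio(posts: list[str]) -> float:
    """Fraction of consecutive post pairs that are more than 80% similar."""
    if len(posts) < 2:
        return 0.0
    similar = sum(
        1 for a, b in zip(posts, posts[1:])
        if SequenceMatcher(None, a, b).ratio() > 0.8
    )
    return similar / (len(posts) - 1)

def looks_like_proxy(posts: list[str], threshold: float = 0.5) -> bool:
    """Flag the account when at least half of consecutive posts repeat."""
    return near_duplicate_ratio(posts) >= threshold

repetitive = ["Glory to the motherland!", "Glory to the motherland!!",
              "Glory to the motherland!"]
varied = ["Nice weather today", "Reading a good book", "Off to the market"]
print(looks_like_proxy(repetitive))  # True
print(looks_like_proxy(varied))      # False
```

The point is not that this toy filter would work in practice, but that simple statistical fingerprints of coordinated accounts exist and can be computed cheaply at scale.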
Now we turn to the question of what our legal regulator would look like.
A global regulator is a UN pipe dream of inevitable corruption and political deals in smoke-filled rooms. We should not allow ourselves to go down that route.
Instead we can have national regulators applying common principles and standards of regulation. Where we have those, they can cooperate with foreign regulatory counterparts that seem to apply the same standards - or even merge with them.
Here are the basic elements of social media regulation.
KYC procedures mandatory for all social media subscribers.
A right to respond for those defamed.
In outrageous or unsustainable cases of defamation, the regulator may make orders destroying or banning permanently the offensive material.
Policing materials that incite others to commit crimes should be a matter for the individual social media providers. However, they will be held fairly strictly liable for their oversights. So if I am a victim of an act of violence unlawfully incited on a social media website, I can sue - as well as the social media site being sanctioned within regulatory or criminal parameters.
Enforcement by victims is generally the way to go, to avoid everybody having to read everything. This may involve complaints to, and claims for compensation through, the regulator against specific social media companies; requests for disclosure of KYC records for errant accounts; and ultimately access to the courts for injunctive relief if nothing else works.
Would all this serve as an icy hand on entrepreneurship? Not at all; no more than defamation and licensing laws do on print media and television respectively.
Social media needs regulating because it is manifestly not self-regulating. Indeed there is no reason why it should be.
It is a legitimate function of government to regulate potential (even unwitting) intermediaries to the commission of serious crimes and other wrongdoing. Social media would be massively improved as a result of relatively modest government expenditure.
The PALADINS. We are here to serve.