In a speech to the UN today, the UK Prime Minister, Theresa May, called upon governments and industry to ‘go further and faster’ in fighting the “terrorist use of the internet”, both by reducing the time taken to remove terrorist material posted online and by having tech companies do more to stop such content (for example, videos showing how to make a bomb) being uploaded in the first place. Modern tech companies use software algorithms to identify material which may either break a site’s own terms of use or be otherwise criminal; however, these safeguards are not foolproof, and on many occasions such inappropriate material remains available online.
According to Kent Walker (General Counsel for Google), however, the machine learning at the heart of these safeguards, despite being a powerful tool, is “…still relatively early in its evolution”, and identifying “…the difference between bomb instructions and something that might look similar but be perfectly legal…is a real challenge.” The inference here is that the ‘communications platforms’ on whose behalf Kent Walker is speaking consider that they cannot, and should not, be held responsible for the fact that such content is posted. When asked whether he believed that regulation would help, Kent Walker also stated that “The challenge is not one of liability, it is the problem of getting the analysis right, and fast enough, at a scale of billions of appearances on the web a day; and in the face of thousands and in some cases millions of people who are posting material that is potentially problematic.”
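To make the trade-off Walker describes concrete, here is a minimal sketch of how automated moderation of this kind typically works: a classifier produces a confidence score, and every threshold choice either lets problematic material through or floods human reviewers. The function, thresholds and example posts below are purely illustrative assumptions, not any platform’s real system.

```python
# Illustrative sketch only: an automated classifier can only act on a
# confidence score, so the platform must choose thresholds. All names and
# numbers are hypothetical, not how Google or any other platform works.

def triage(score: float,
           block_threshold: float = 0.95,
           review_threshold: float = 0.60) -> str:
    """Route a post based on a classifier's 'problematic content' score (0-1)."""
    if score >= block_threshold:
        return "block"         # high confidence: removed automatically
    if score >= review_threshold:
        return "human_review"  # uncertain: queued for a slow, costly human check
    return "publish"           # low score: goes live, including any misses


if __name__ == "__main__":
    # At billions of posts a day, even a tiny uncertain fraction becomes an
    # enormous review queue, while everything below the threshold is published.
    posts = [("cake recipe", 0.02),
             ("chemistry homework", 0.70),
             ("bomb instructions", 0.97)]
    for text, score in posts:
        print(text, "->", triage(score))
```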
Really? The problem is that these communications platforms cannot recognise what they refer to as problematic material quickly enough… after they have released it to the public? Kent Walker is effectively saying that this is a technological issue, and that it will be resolved ‘just as soon as machines are good and fast enough’. But this is NOT a technological problem: if you do not wish to publish something, then you can prevent its publication; publication is a deliberate act, not something which occurs outside of our control. Twitter, Instagram and YouTube all provide platforms which allow one-to-many communication, and all of them can be turned off (or can implement a pre-publication checking service, as sketched below).
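The point that publication is a deliberate act can be shown with a very small sketch of a pre-publication gate: nothing goes live until a check has actually run. The check itself (automated, human, or both) is left abstract, and everything here is a hypothetical illustration rather than any platform’s real pipeline.

```python
# Hypothetical sketch of a pre-publication checking service: submission never
# publishes anything; publication only happens after an explicit check.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Platform:
    check: Callable[[str], bool]          # returns True if content is acceptable
    pending: List[str] = field(default_factory=list)
    published: List[str] = field(default_factory=list)

    def submit(self, content: str) -> None:
        # Submission alone never makes content visible.
        self.pending.append(content)

    def review_queue(self) -> None:
        # Publication only happens here, as a deliberate act after the check.
        for content in self.pending:
            if self.check(content):
                self.published.append(content)
        self.pending.clear()


if __name__ == "__main__":
    platform = Platform(check=lambda text: "bomb instructions" not in text)
    platform.submit("holiday photos")
    platform.submit("bomb instructions for beginners")
    platform.review_queue()
    print(platform.published)   # only the acceptable post ever appears
```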
Surely the question here, however, is who is responsible for the content which is broadcast: the content provider or the broadcaster? If the responsibility lies with the content provider (which in the case of Twitter is the individual), then we need to find a way in which the content provider can be held accountable for any inappropriate content. If, however, the responsibility lies with the broadcaster – as it already does with more traditional media such as television – then surely it is up to the broadcaster (in this case Google) to ensure the acceptability of that content. The advent of broadcasting via radio, television and cinema in the early 20th century also saw the advent of broadcasting regulation and legislation (for example, the Radio Act of 1927 in the United States). This body of regulation and legislation still exists today, and companies which broadcast communications breaching these regulations and laws are held to account. Why, then, can a similar level of accountability not be applied to the modern ‘communication platforms’?
In the current situation, these communication platforms accept that they cannot maintain the desired level of service or adhere to the relevant regulations, so they compensate for these technical limitations by also relying upon assistance from unpaid users: for example, the Metropolitan Police in the UK has a unit which refers material it believes breaches the rules to these communication platforms. It is worth mentioning that these communication platforms cannot even effectively enforce their own rules – let alone government legislation – without help: the content in question sometimes breaches the terms of use of the services themselves, and yet there is still a heavy reliance upon users and ‘trusted flaggers’ to draw the companies’ attention to such breaches. So if they cannot maintain the necessary standards of service, why are they still allowed to continue doing business?
Kent Walker indicates that he believes this problem is “…a shared responsibility”, insisting that they “…do everything we can to identify and respond quickly once we are aware of this material, but the larger problem is you can’t necessarily catch everything on the entirety of the internet…”, going on to say that “it is challenging for any communications platform to identify everything that is travelling over its lines and make very fine-grain and sophisticated analysis of what’s appropriate and what’s inappropriate and what’s potentially illegal.” The point, however, is that monitoring compliance is not a ‘shared responsibility’: as an individual I have no control over Google, its policies or its working practices. It is not the responsibility of the ‘viewer’ to ensure that the information being broadcast is appropriate. These communications platforms permit content (in many cases) to be provided anonymously, a feature which makes it very difficult (if not impossible) to trace and prosecute the content provider after the content has been broadcast. Under conditions such as these, the broadcaster cannot absolve itself of responsibility, because the broadcaster becomes the only point at which content can be prevented from being published – the only point at which any standards can be upheld.
Perhaps Kent Walker is right to some degree, however: perhaps the answer is not new legislation; perhaps the answer is enforcement, and we should start to enforce the legislation that already exists. If companies such as YouTube and Twitter are broadcasting offensive material, bomb-making instructions, death threats and the like, then they should be stopped, immediately (if that is the common standard).
It is not the fault of technology that we broadcast offensive material – it is our fault.
Inappropriateness is, like beauty, in the eye of the beholder, so banning everything WE don’t like is both impractical and probably just as politically inappropriate.
Censorship of a similar kind already exists in print and on commercial television: advertisers will cancel the revenue stream for content their customers (or shareholders) don’t like.
So, is it any more possible to insult any of these (effectively free-riding) tech companies on their own sites than it is to get, say, “Murdoch is a moron” printed on the letters page of the Sun or the Times?