Written evidence submitted by Dr David Harrison

 

 

 

 

 

Elective, Communal Self-Censorship on the Internet.

 

By Dr. David Harrison.

 

 

I used to write software when I was younger; now I design solutions to technology problems. Elements of what is described below have appeared in online papers, but this text has not appeared elsewhere.

 

Solutions to the concerns raised about social media must be technologically and economically viable; there is no point in mandating something that will not work. State intervention should be proportionate and comparable to offline intervention, and should be made with consideration for any precedents being set and for the dangers of unintended consequences. Activists’ solutions are rarely good ones.

 

Centralised censorship, whether by government or by a tech company, will always be fairly clumsy. There will be many false positives and little chance for those unfairly blocked to clear their names. So-called ‘AI’ does not work well when dealing with natural language or contextualisation. And ‘one size’ does not fit all: users are a diverse bunch, and each is upset by different things.

 

Businesses using the Web 2.0 model, which is an important, core feature of our internet, cannot employ enough staff to check all posts. Even the Stasi were not able to employ enough people to spy on everyone else, and we should not be trying to emulate them. Imposing centralised censorship in a Western democracy will normalise it globally. Permitting dictatorships to heavily censor the internet, silencing, de-platforming and criminalising their political opponents and critics, simply to stop individuals being called names online in Western democracies, is a terrible trade-off. The unintended consequences of government intervention can be brutal. Since the G7’s Financial Action Task Force ordered countries to implement oversight to reduce money laundering, some governments have been accused of using the resulting legislation to crack down on NGOs critical of their policies, freezing accounts and imprisoning innocent people.

 

There is a better way, one that squares the circle.

 

Firstly, separate the three age groups of users online: pre-teen, teenager up to the age of majority, and adult. Major social media services should be obliged to offer parental control features for those in the first two groups.

 

The same software can be used for all, but pre-adult accounts could be controlled and monitored using a parent’s ‘dashboard’ app/website. By default, the wider adult community would not be able to access content on these accounts or contact pre-adult users, whilst pre-teens and teenagers would only see content created by others in their own age group. Celebrities, advertisers and other third parties could create content for pre-adults, but it would be heavily regulated. Parents could create exceptions, allowing children to interact with adult family members. I would advise against a formal age verification system: increased complexity rapidly decreases accessibility. Different apps could be used for each age group. Parents would be expected to take the lead, setting up appropriate accounts for their children and monitoring them, just as they feed and clothe them. Monitoring online behaviour is now part of being a parent.
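
As an illustration only, the default access rules described above can be sketched in a few lines of Python. The names used here (AgeGroup, Account, can_contact) are hypothetical and do not correspond to any platform’s actual software:

    # Minimal sketch, assuming three age groups and parent-managed exceptions.
    from dataclasses import dataclass, field
    from enum import Enum

    class AgeGroup(Enum):
        PRE_TEEN = "pre-teen"
        TEENAGER = "teenager"
        ADULT = "adult"

    @dataclass
    class Account:
        user_id: str
        age_group: AgeGroup
        # Adults (e.g. family members) explicitly allowed via the parent's dashboard.
        parental_exceptions: set[str] = field(default_factory=set)

    def can_contact(sender: Account, recipient: Account) -> bool:
        """By default, adults cannot reach pre-adult accounts, and pre-adults only
        see content from their own age group, unless a parent has added an exception."""
        if recipient.age_group is AgeGroup.ADULT:
            return True
        if sender.user_id in recipient.parental_exceptions:
            return True
        return sender.age_group is recipient.age_group

The point of the sketch is that the rule set is small and applied by default; the parent’s dashboard only ever adds exceptions to it.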

 

 

Elective, Communal Self-Censorship Filters.

 

Pre-adult accounts would receive carefully moderated advertising and would have basic censorship implemented for things like expletives and explicit content. Beyond this, and for all adult accounts, elective, communal self-censorship filters (ECSFs) would be used.

 

Each user (or their parent on their behalf), should they wish, could freely adopt one or more standard and/or third-party filters. These would filter all of the content that they see on a social media service. The approach could be extended to the internet as a whole by being implemented in a browser, but for now it is described for a single service (or for multiple services, if compatibility were established).

 

These filters would block words, terms, specific types of content and specific users. So a user could tick the box to use the social media company’s basic expletive blocker, plus a second filter tailored specifically to their religious or political beliefs, maintained by the community that uses it.
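
As a rough sketch of how such stacked filters might work in practice (the class and field names are hypothetical, and the terms are placeholders):

    # Minimal sketch: a user's feed is filtered by every filter they have ticked.
    from dataclasses import dataclass, field

    @dataclass
    class Filter:
        name: str
        blocked_terms: set[str] = field(default_factory=set)
        blocked_users: set[str] = field(default_factory=set)
        blocked_content_types: set[str] = field(default_factory=set)

        def blocks(self, author: str, content_type: str, text: str) -> bool:
            lowered = text.lower()
            return (author in self.blocked_users
                    or content_type in self.blocked_content_types
                    or any(term in lowered for term in self.blocked_terms))

    def visible(author: str, content_type: str, text: str, active_filters: list) -> bool:
        # A post is shown only if none of the user's chosen filters blocks it.
        return not any(f.blocks(author, content_type, text) for f in active_filters)

    # The platform's basic expletive blocker plus a community-maintained filter.
    expletives = Filter("basic-expletives", blocked_terms={"expletive1", "expletive2"})
    community = Filter("community-filter", blocked_terms={"slur1"}, blocked_users={"troll42"})
    print(visible("alice", "text", "a perfectly benign post", [expletives, community]))  # True

Because each filter is just data, a community can maintain its own list independently of the platform, and a user can tick as many filters as they like.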

 

If content got through the filter, and a user felt that it should have been blocked, they could report it. Those moderating the filter (for a third-party filter, other users) would check it and could then rapidly block it (and perhaps its poster) for all users who have chosen to implement that filter. These filters need not be operated or moderated by the social media companies, although they could host a library of them.
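
A minimal sketch of that report-and-moderate loop follows; again, the names (SharedFilter, report, moderate) are illustrative rather than any real system’s API:

    class SharedFilter:
        """One community-maintained filter. Every subscriber references the same
        rule set, so a moderator's decision takes effect for all of them at once."""

        def __init__(self, name):
            self.name = name
            self.blocked_terms = set()
            self.blocked_users = set()
            self.pending_reports = []

        def report(self, post_id, author, offending_term):
            # Any subscriber can flag content that slipped through the filter.
            self.pending_reports.append((post_id, author, offending_term))

        def moderate(self, report, block_author=False):
            # A filter moderator (for a third-party filter, other users) reviews the report
            # and, if upheld, blocks the term (and perhaps the poster) for every subscriber.
            post_id, author, offending_term = report
            self.blocked_terms.add(offending_term)
            if block_author:
                self.blocked_users.add(author)
            self.pending_reports.remove(report)

Nothing in that loop needs to be run by the social media company itself; the company only needs to apply whichever shared rule sets a user has subscribed to.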

 

This permits socio-cultural groups to each have an appropriate filter for their needs, with crowd-sourced monitoring of content. They may choose a Catholic one, or a Sunni one, or one that is particularly strong on blocking racial abuse. It also ensures that their filter can be language-specific.

 

This is much more flexible than any centralised censorship. You could not only choose the filters most suitable for you (or your child), but also create unique exceptions and inclusions for an account, extending the current options that allow users to block individual users or specific types of advert.

 

Those who do not wish to have their accounts censored can freely access whatever is posted, and will not feel that their government or the social media company is interfering with their internet.

 

As this is self-censorship, it is compatible with the First Amendment to the United States Constitution. This is important, as most major social media services are American in origin. ECSFs could be applied globally, reducing the toxic political friction that social media censorship has recently caused in the US. Democrat and Republican ECSFs could be offered to American users, reducing the level of abrasive and abusive interaction on social media. Each group would be able to block the posts of the other and interact within their own communities, whilst still all using a single social media network and sharing benign content.

 

Over time, the ECSFs would improve, becoming more effective, and all users would see only the content that they wished to see.

 

Uploaders could also be asked to define and rate their own content using some simple tick boxes. This would be particularly useful in areas that have caused problems in the past, such as ‘artistic nudity’ (a famous painting) or ‘medical nudity’ (breast cancer awareness).
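
A sketch of how those tick boxes might feed into the filters (the labels here are placeholders, not a proposed taxonomy):

    # Uploaders declare categories; a filter can then pass one category while blocking another.
    SELF_RATING_BOXES = {"artistic_nudity", "medical_nudity", "violence", "strong_language"}

    def tag_upload(declared: set[str]) -> set[str]:
        # Keep only recognised tick-box labels supplied by the uploader.
        return declared & SELF_RATING_BOXES

    def passes(post_tags: set[str], filter_blocked_tags: set[str]) -> bool:
        return not (post_tags & filter_blocked_tags)

    # A breast cancer awareness post is shown to a user whose filter blocks only artistic nudity.
    print(passes(tag_upload({"medical_nudity"}), {"artistic_nudity"}))  # True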

 

The next generation of internet technology will involve distributed systems. Centralised censorship will not work well on distributed systems, but ECSFs would, as they are themselves an example of a distributed system in action. The concept is reasonably ‘future-proof’. Normalising the use of ECSFs now would see them carried over on to future distributed versions of the online services that we use today.

 

A rudimentary version of this is already in use: ISPs offer an optional porn-blocking filter for households with children. It is relatively easy to block most porn sites, as even basic word-matching can spot them; the context and the oft-used terms are well known. For social media, the unlimited range of contexts requires something like ECSFs.
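
For comparison, the crude word-matching that an ISP-level blocker can rely on is only a few lines (the term list below is a placeholder):

    # Sketch: substring matching on domain names is enough for most adult sites,
    # because the oft-used terms are well known; social media posts have no such fixed context.
    BLOCKED_DOMAIN_TERMS = {"porn", "xxx"}

    def domain_blocked(domain: str) -> bool:
        return any(term in domain.lower() for term in BLOCKED_DOMAIN_TERMS)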

 

Frequently mooted age verification systems for porn would be a disaster. They would push large numbers of people away from the major, well-regulated sites into the darker corners of the net, and force many people to hand over personal information that could then be hacked.

 

One hundred per cent effectiveness is not the goal. It is important to accept rough edges in a society that values its freedom. To be 100% watertight, any censorship or other restriction would have to be stricter than China’s and would have to use whitelisting. This is not desirable in a democratic society and would decimate the social, cultural, educational, economic and technological benefits of the internet.

 

Whitelisting of individual sites is not a viable solution for general web use, but it could be used in devices and browsers intended for pre-teens, or perhaps in schools, which need only access a small number of sites and services. Even then, it would block legitimate use of lesser-used sites and new services, and it should be possible for exceptions to be requested, for example for older year groups or for individual projects.

 

The recently mooted idea for a tech company to scan unencrypted user data, Pegasus-style, is a shockingly bad one. It drives a tank through public expectations of privacy and must surely break data protection laws. The use of ‘AI’ with human checkers will also be unacceptable to most. Photos taken by parents of their children, selfies by teenagers at the beach, photos taken by patients to send to their GPs and intimate photos sent between partners could all end up being ‘checked’ by the staff of a technology company. This is a gross invasion of privacy and should be illegal.

 

Worse, this breaks trust in the security of operating systems, webmail and cloud services. Once aware that tech companies can do this, any regime not already mandating it will require that users’ PCs and devices be scanned for political dissent. It throws up a red flag to anyone working with material that requires secrecy. How can UK politicians conduct UK/US trade talks using PCs that run American operating systems, if the US government could mandate auto-scanning as a national security requirement? Anyone working on sensitive material would need to switch to a feature phone, and the government may want to hang on to its fax machines for a bit.

 

This sort of covert access to users’ data should not be possible without a court warrant, issued on an individual, case-by-case basis, and only by law enforcement.

 

This technology may also increase child abuse. A child is abused when CSAM is created, but not when existing material is shared, sold or ‘recycled’. Over time, a ‘bank’ of millions of CSAM images and videos has been built up, and there are only so many hours in a day; the recycling of this extant material will be enough for many perverts. However, the technology recently mooted will render this entire bank of extant material too dangerous to own or share. That will leave a vacuum and increase the value and desirability of new, original content that has not been seen by law enforcement or the tech companies: created-to-order, ‘single-use’ CSAM. This fresh, ‘internet-safe’ material can only be created by abusing children. I am considerably more worried about the abusive creation of new content than about the circulation of recycled content. Cracking down on the latter may increase the former.

 

Very strong restrictions to prevent criminal behaviour will always be evaded by criminals, but would damage the utility of the internet for the vast majority of users, who have no criminal intent. If any more layers of identity verification are added to the process of accessing online services such as payment and banking, large numbers of users will find them too cumbersome to use. Two-factor authentication already causes problems for some users if it is required too frequently. Security should be proportionate, even during the moral panics we seem to find ourselves in today.

 

VPNs should not be criminalised. They offer increased security for users. Banning them would seal off those trapped inside dictatorships, locking the doors and windows of the prison created by abusive regimes.

 

It is unwise to mandate specifics for technology in legislation. Technology changes rapidly. A badly worded line in a piece of legislation may isolate the UK from developing or adopting new technologies. Be vague in legislation and work with the major tech companies on specifics.

 

In the late 1990s South Korea mandated the use of Microsoft’s ActiveX for online services. It was unreliable and restrictive, and when better alternatives were developed, Korean users could not easily switch to them.

 

Some nations ban peer-to-peer technologies, as they have been associated with copyright infringement in the past and because they are harder for governments to spy upon. Peer-to-peer is fundamental to the future of distributed systems, which will be the next major advance in technology and internet services. It will be particularly important in the provision of resilient communications in disaster zones. As climate change worsens, such technologies will be a necessity.

 

There is no reputable way to block what is termed ‘disinformation’ or ‘fake news’, online or off, in a society that values freedom of speech as we do. The UK has not pre-licensed print since 1696 and should not attempt autocratic censorship online in a knee-jerk reaction to current issues. We have laws for such things as incitement to racial hatred that can be applied online, but a crime should be committed before action is taken.

 

Donald Trump has proved beyond reasonable doubt that one person’s truth is another’s ‘fake news’. Nothing lends credence to a conspiracy theory quite like government censorship or prosecution. We do not arrest people for talking rubbish in pubs; we should not arrest them for talking rubbish online. There are much better ways to deal with ‘vaccine hesitancy’ than dragging civil rights back to the dark ages.

 

We should not be copying China, nor treating Orwell’s ‘Nineteen Eighty-Four’ as a manual. A democratic society should walk away from bouts of moral panic and populist or activist demands to censor, and consider the bigger picture. The #MeToo movement was important and offers a benchmark for any future legislation: if a proposal would have prevented the #MeToo movement from growing, it is a bad idea and should not be made law.

 

The shift to the internet is the largest social change since the invention of the printing press, and has happened very quickly. People change more slowly. The transition period will be a little bumpy. It is important to legislate with a view to permitting the future to happen and helping people to migrate to it, rather than repeatedly trying to prevent the future from happening, because it is different, we do not all understand it, and we are fearful of change. We will adapt, eventually, perhaps generationally, and reap the rewards.

 

Despite its public service remit, the BBC has produced few TV programmes in plain English to help the less computer-literate become more confident in their use of technology, or to assist parents in moderating their children’s use of it. It has, however, happily run endless scare stories as news, spreading fear of the dangers of the internet and demonising technology as a threat to society, women and children. It should make more of an effort to explain and support technology use, and spend less time making people fear it.

 

Encryption is an example of something we need to accommodate rather than fear. The Ministry of Defence could safely store all of its most sensitive files on a Chinese-owned server in Beijing, as long as the data was encrypted. Strong/end-to-end encryption is vital for all secure internet services, and yet politicians still attack it and demand ‘back-doors’ in software for law enforcement. If you place a ‘back-door’ in a system, you cannot restrict its use to law enforcement. Banning strong/end-to-end encryption is like ordering everyone to leave the front door of their home unlocked in the name of national security. We need to develop law enforcement procedures that work in a world with strong/end-to-end encryption and globalised cloud storage.
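
To illustrate the point about encrypted storage, here is a minimal sketch using the third-party Python ‘cryptography’ package purely as an example; any strong encryption scheme makes the same point:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the key stays with the data owner
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"most sensitive file contents")
    # The ciphertext can be stored on any server, anywhere; without the key it is just noise.
    assert cipher.decrypt(ciphertext) == b"most sensitive file contents"

A mandated ‘back-door’ would amount to handing a copy of that key to a third party, at which point the guarantee above no longer holds for anyone.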

 

Technology is blamed for many things, but it is simply a tool, like a hammer. You can use a hammer to build something, or you can use it as a weapon. We do not ban hammers, or punish those who manufacture them, because some people use them to hurt others. Legislative action should not sanction technology or those who produce it, but those who use it for criminal or anti-social purposes.