{"HashCode":-849872376,"Height":841.0,"Width":595.0,"Placement":"Header","Index":"Primary","Section":1,"Top":0.0,"Left":0.0}

 

Ian Tighe – written evidence (FEO0021)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

1)           Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?

 

Yes. Freedom of expression is under threat. Recent events demonstrate how messages that do not fit a particular narrative are rejected and users are banned. We may not like the message, but everyone has a right to speak freely, to paraphrase Voltaire. Alignment around political beliefs and persuasions increasingly dominates the narrative, and this segregates the public into silos of thought and leads to an inferior exchange of views. It closes down discussion, creates barriers and harms people’s ability to survey all the facts and opinions.

 

2)           How should good digital citizenship be promoted? How can education help?

 

My concern with any form of education on citizenship is very similar to the concern I feel about the EU’s Erasmus programme. In the wrong hands, with the wrong messages, we could see at least subliminal messaging in an attempt to shape thinking along political lines.

 

3)           Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?

 

No. We must review publisher versus platform status, as it is now a huge anomaly. Clearly any censorship removes a provider from being a platform, yet providers escape responsibility both for the content they block and for the content they allow to pass. It is impossible to censor and remain a platform. This is key to understanding how any model can succeed and what law needs to be established to enforce it. The principles of net neutrality must be at the root of all solutions and brought to life for providers and platforms through regulation and civil law.

 

4)           Should online platforms be under a legal duty to protect freedom of expression?

 

Yes.

 

5)           What model of legal liability for content is most appropriate for online platforms?

 

See 3 above. I believe providers must declare whether they are a publisher or a platform, and quite different models should be developed for each stream. Publishers must account for what is published; platforms must not censor. Both must observe regulation of some materials. Claiming to be one while acting as the other must itself be regulated. See 11 below.

 

6)           To what extent should users be allowed anonymity online?

 

Users must not be readily identifiable by other users unless they choose to be. But for an account to be held in the first place, the platform provider must be able to trace back and contact the actual user behind that anonymity.

 

7)           How can technology be used to help protect the freedom of expression? 

 

Algorithms may not be the solution. We have seen how tweaking the variables in models, from HMT’s Brexit economic projections through to COVID projections, produces wildly different, misleading and wrong results. Algorithms rely upon these variables and the “inferencing” built into them. Both are open to manipulation to favour one outcome over another.

 

8)           How do the design and norms of platforms influence the freedom of expression? How can platforms create environments that reduce the propensity for online harms?

 

Regulation; see 5 above.

 

9)           How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?

 

Algorithms are a weak point, and transparency will serve to highlight that. Without a means to enforce an independent review of those algorithms, and of the precise data used to configure and service them, you will never identify those that are biased, faulty, inadequate and so on. Algorithms will be the Achilles’ heel.

 

10)      How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?

 

No submission

 

11)      To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation?

 

Oligopolies have already formed and online freedoms are being curtailed. Breaking up big tech may be one answer, but a better one might be requiring companies above a certain turnover level to be licensed to provide service. This would allow regulators to examine company activity and apply a bias check, both initially and periodically.

 

12)      Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?

 

No submission

 

 

Thank you for your time.

 

 

13 January 2021