Jury Duty — A Decentralised Moderation Model for Governing a Social Media Platform

Fiachra Ward
Feb 20, 2022

Free Speech

‘Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers’ — Article 19 of the Universal Declaration of Human Rights, adopted in 1948.

The common complaint about moderation of any kind is that it impinges on the universal right to free speech. From a technical perspective, one could argue that complete freedom of speech requires no centralised authority capable of removing or moderating content. However, the ability to remove dangerous or illegal content creates a space that is more open and inclusive for everyone, particularly marginalised individuals whose speech may be chilled by exposure to hateful threats. On that view, moderation is paramount to a world free of hate-based violence, and ultimately promotes more speech.

Moderation — Centralised vs Decentralised

Centralised moderation usually involves teams of employees and AI-assisted tools for flagging, assessing, and removing content that breaches the code of conduct. This works in a well-regulated way on centralised platforms, but it causes problems when content appears to be over-censored, and it is often perceived as a means of pushing the agenda of the corporation in power.

On the opposite end of the spectrum are the issues surrounding decentralised platforms. Here, moderation and censorship are largely removed, or community driven. Some of these platforms, such as Minds and Diaspora*, have been infiltrated by extremists, criminal gangs and coordinated crime networks. In extreme cases, when neglected, this can turn into 'the decentralised web of hate': certainly not the inclusive culture that a social network aspires to.

Finding the Balance

One of the most acute problems with centralised platforms is the need to develop one-size-fits-all moderation policies for billions of users. This is an impossible task, and it fails to account for differences in culture and in individual sensitivity. Decentralising moderation puts decisions about what content should be blocked or allowed in the hands of users and communities. Community-level moderation can happen through servers, chatrooms, or topic-based communities. User-level moderation can allow users to opt into different content preferences, or to control their interactions with other users.

There are a few examples of decentralised moderation in platforms such as Matrix, Aether, and uHive. Even Reddit, while being a centralised platform, has harnessed a hybrid model quite effectively, with community moderators maintaining subreddits. Each of these platforms has a different model for involving the community in the decision-making process, and almost all rely on volunteers becoming 'community moderators' and reviewing a large volume of cases. This model works quite well within small servers, but it is less appropriate for platform-wide governance.

Platform-wide moderation by a small cohort of community moderators resembles centralised moderation: power is left in the hands of a minority of decision makers. These moderators are under no scrutiny, their incentives to perform their duties may become an issue, and they will each adjudicate differently according to their individual biases. A shared decision-making process on cases would be a fairer model, akin to a jury.

Harnessing the wisdom of crowds has been researched extensively and shows huge promise. Crowds have been shown to fact-check and debunk false claims as accurately as professional fact-checkers. For this to work, it is important to preserve independence, diversity, and equality when processing crowd opinion. It could be the perfect means of fairly moderating a platform.

Jury Duty — A model of moderation used by Waivlength

Waivlength, a decentralised social media platform in development backed by the Algorand Foundation, includes a facility through which any user can opt in to 'jury duty'. Those signed up for jury duty have a dashboard attached to their profile that auto-fills with newly generated reports. Jurors review each report and give their verdict. A case is not closed until the verdicts reach consensus beyond a certain threshold.

All jurors complete a brief induction when opting in to jury duty, to familiarise themselves with the terms of service, the code of conduct and the grading system. Jurors grade misdemeanours according to four categories (a minimal code sketch of this scale follows the list):

0 — No action

The report was unfounded and the accused receives no penalty. The unwarranted report is noted against the accuser. Repeated unsubstantiated reports may negatively impact the accuser through the platform's social consensus protocol.

1 — Grade 1

A minor breach of conduct; the offence is noted. Repeated Grade 1 infringements may begin to negatively impact the user through the social consensus protocol.

2 — Grade 2

A moderate breach of conduct; the accused receives an alert cautioning them for their actions, and they are penalised by the social consensus protocol. Repeated Grade 2 infringements may escalate to a Grade 3.

3 — Grade 3

A severe breach of conduct; the accused may receive a temporary suspension or a permanent ban, depending on their previous disciplinary record. They are also penalised more severely by the social consensus protocol.
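For illustration only, the grading scale above could be represented as a simple enumeration. The names and comments below are assumptions made for the sketch, not Waivlength's actual implementation.

```python
from enum import IntEnum

class Verdict(IntEnum):
    """Illustrative encoding of the four grading categories (assumed names)."""
    NO_ACTION = 0  # report unfounded; noted against the accuser instead
    GRADE_1 = 1    # minor breach; offence noted
    GRADE_2 = 2    # moderate breach; caution issued, rating penalised
    GRADE_3 = 3    # severe breach; suspension or ban, heavier penalty
```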

Process

When a piece of content is reported, the case is randomly assigned to the dashboards of three independent jurors. If the jurors all independently agree that no action is necessary and that the report was unfounded, the case is closed.

If, however, the jurors disagree, the algorithm may present the case to more jurors until a majority decision is agreed upon. For low-grade offences, only a small number of jurors is needed to reach consensus.

For higher-grade breaches of conduct, however, where the decision is between a Grade 2 and a Grade 3 offence, more jurors are recruited to deliver a verdict. The sanctions are higher here, so a higher consensus threshold is necessary.
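As a minimal sketch of how such an escalation loop could behave, assume a case opens with three jurors, recruits more when they disagree, and applies a stricter agreement threshold for higher grades. The thresholds, batch sizes and helper names here are illustrative assumptions, not Waivlength's published algorithm.

```python
import random
from collections import Counter

# Assumed consensus thresholds: higher proposed grades need broader agreement.
THRESHOLDS = {0: 0.66, 1: 0.66, 2: 0.75, 3: 0.80}

def resolve_case(juror_pool, get_verdict, initial_jurors=3, batch=2, max_jurors=15):
    """Escalate a reported case until one grade clears its consensus threshold."""
    jurors = random.sample(juror_pool, initial_jurors)
    verdicts = [get_verdict(juror) for juror in jurors]

    while True:
        grade, votes = Counter(verdicts).most_common(1)[0]
        if votes / len(verdicts) >= THRESHOLDS[grade]:
            return grade, verdicts              # consensus reached, case closed
        if len(verdicts) >= max_jurors:
            return grade, verdicts              # cap reached, fall back to majority
        # Recruit additional independent jurors and gather their verdicts.
        remaining = [j for j in juror_pool if j not in jurors]
        extra = random.sample(remaining, min(batch, len(remaining)))
        if not extra:
            return grade, verdicts
        jurors += extra
        verdicts += [get_verdict(juror) for juror in extra]
```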

How can a fair jury be upheld?

We know from the history of the justice system that the fairest jury is one that represents a wide spectrum of age, gender, ethnicity, political belief and residence. An advantage of a platform-wide jury duty system is that it can use AI-driven engines to assemble a diverse jury for each case. When opting in to jury duty, users answer a short set of general questions about these attributes. This helps to reduce bias in the decision-making process.
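One way such an engine could assemble a demographically balanced panel is simple stratified sampling over the attributes jurors disclose when opting in. The juror records and the attribute name below are hypothetical, intended only to show the idea.

```python
import random
from itertools import cycle

def select_diverse_jury(jurors, size, attribute="age_band"):
    """Draw jurors round-robin from each demographic group of the chosen attribute."""
    groups = {}
    for juror in jurors:                      # jurors assumed to be dicts of attributes
        groups.setdefault(juror[attribute], []).append(juror)
    for members in groups.values():
        random.shuffle(members)

    jury = []
    for key in cycle(sorted(groups)):         # rotate through the groups in turn
        if groups[key]:
            jury.append(groups[key].pop())
        if len(jury) == size or not any(groups.values()):
            break
    return jury
```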

A common concern in decentralised models of governance is the lack of an incentive structure for community members to report accurately, or to opt in to jury duty in the first place. With nobody overseeing their actions, how can we be sure jurors won't abuse the system? There needs to be an appropriate incentive system, and also a penalty for those who try to game it.

Most decentralised platforms have an integrated cryptocurrency native to the platform: a social token that can be used for platform governance, community rewards and financial transactions. Reserving a pool of token rewards for jurors is an obvious way to incentivise users to take part in community moderation efforts. Waivlength has its WAIV token for this.

Linking financial incentives to unbiased, honest reporting may not seem to be the perfect solution. Will that lead to a situation where those who are wealthy (and hence don’t need financial rewards) simply do as they please, and those who desperately need the money try to game the system to maximise their income? This is why a two-pronged approach is necessary.

Jury duty would also need to incorporate a protocol for penalising jurors who are consistently found on the wrong side of the verdict. A protocol like this has precedent in Yup.io. Yup is a web extension that allows users to earn token rewards for rating social media content across multiple platforms. The Yup protocol is a social consensus protocol run on a curator economy: users are rewarded according to how closely their rating matches the average.

If a protocol rated jurors on how their verdict compared with the final verdict, it could reward accurate jurors with more cases and greater financial rewards (a multiplier effect), while penalising those who regularly fall on the wrong end of the verdict by presenting them with fewer cases. Waivlength's Social Consensus protocol is designed to implement this, linking a user's rating to the token rewards they earn.
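A toy version of that feedback loop might look like the sketch below: a juror's accuracy rating moves up when their verdict matches the final outcome and down when it misses, and the rating scales both the token reward and the number of cases routed to them. The update rule and constants are assumptions for illustration, not the published Social Consensus protocol.

```python
def update_juror_rating(rating, verdict, final_verdict,
                        reward_step=0.05, penalty_step=0.10):
    """Nudge a juror's accuracy rating (0.0 to 1.0) toward 1.0 on a match, down on a miss."""
    if verdict == final_verdict:
        return min(1.0, rating + reward_step)
    return max(0.0, rating - penalty_step)

def allocation_and_reward(rating, base_cases=5, base_reward=10.0):
    """Higher-rated jurors are routed more cases and earn a reward multiplier."""
    multiplier = 0.5 + rating                 # e.g. a 1.0 rating gives a 1.5x multiplier
    return round(base_cases * multiplier), base_reward * multiplier
```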

Waivlength

The Social Consensus protocol is the framework that determines the value of each user's contribution to the Waivlength network. The token rewards that users receive are proportional to their rating as determined by the protocol. The protocol recognises all forms of positive contribution: popular content creation, engaging with other content, performing jury duty, successfully referring new users, staking or holding WAIV, and so on, and applies a weighting to each.

Similarly, the protocol also recognises negative contributions and applies a negative rating to a user whose content is flagged, or who is reported and found guilty by the community moderation system. This ensures the most influential members of the community are there on merit and cannot abuse their power.
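Conceptually, the rating amounts to a weighted sum over positive and negative contributions, with token rewards proportional to the result. The contribution names and weights below are purely illustrative assumptions, not Waivlength's real parameters.

```python
# Assumed contribution weights; negative entries model flagged or reported behaviour.
WEIGHTS = {
    "popular_posts": 1.0,
    "engagement": 0.5,
    "jury_duty_cases": 1.5,
    "referrals": 0.8,
    "waiv_staked_days": 0.1,
    "upheld_reports_against_user": -2.0,
    "unfounded_reports_filed": -1.0,
}

def social_consensus_score(contributions):
    """Combine a user's tallied contributions into one weighted rating."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in contributions.items())

def token_reward(score, rate_per_point=2.0):
    """Token rewards scale linearly with a non-negative consensus score."""
    return max(score, 0.0) * rate_per_point
```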

Platform-specific features for moderation

There is no one-size-fits-all moderation framework. It is important to be platform-specific: carefully designed to complement and maintain the platform's unique features and sections. Waivlength has two distinct sections.

1. The Home news feed

Posts are limited to one per day in this section. The news feed is split into two tabs, similar to TikTok's 'For You' and 'Following': a public feed of AI-recommended content and a private, curated feed of only the accounts you want to see.
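A trivial sketch of the one-post-per-day rule, assuming each post record carries a creation timestamp; the field name is an assumption for illustration.

```python
from datetime import date, datetime

def can_post_to_home_feed(user_posts, today=None):
    """Allow a new Home-feed post only if the user has not already posted today."""
    today = today or date.today()
    return all(post["created_at"].date() != today for post in user_posts)

# Example: one post made earlier today blocks a second one.
posts = [{"created_at": datetime.now()}]
print(can_post_to_home_feed(posts))  # False
```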

Moderation is important in the context of the Home news feed because material here can be publicly visible and discoverable by anyone. It is important that content breaching the platform's code of conduct is identified and stopped at source, to preserve the integrity of the platform, and that users are appropriately sanctioned for their transgressions. This applies to public posts and to comments made on public posts.

2. The Hub

The Hub is the messaging section of the platform and offers a more expansive, customisable way to set up groups. These group messaging servers have no post restrictions and can be tailored for communities to chat, share content and discuss, much like Slack or Discord.

Here, moderation can happen on a server-by-server level, by the creators or hosts of the group. Hosts can also assign admin roles to members of the community and delegate moderation duties to certain individuals within the server. These groups can set their own culture and code of conduct and self-govern their servers. Anyone joining these groups does so at their own discretion.
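A small sketch of that delegation model, assuming hosts and delegated admins share server-level moderation rights; the role names and data structures are hypothetical.

```python
from enum import Enum

class Role(Enum):
    HOST = "host"
    ADMIN = "admin"
    MEMBER = "member"

def can_moderate(role):
    """Hosts and delegated admins may moderate within their own server."""
    return role in (Role.HOST, Role.ADMIN)

def remove_from_server(server_members, actor_role, target_user):
    """Server-level sanction only: removal from the group, no platform-wide penalty."""
    if not can_moderate(actor_role):
        raise PermissionError("Only hosts or admins can moderate this server.")
    server_members.discard(target_user)       # server_members assumed to be a set
```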

Within these messaging servers, if there is a grievous breach of the code of conduct, users still have the capacity to report the incident or user for community review if they believe the perpetrator deserves more of a sanction than simply being removed from the messaging server in question.

For decentralised social media networks to gain mass adoption, it is clear that the development of an appropriate moderation system is vital. The introduction of new incentive structures, along with AI tools and frameworks for maintaining accurate reporting, gives a lot of hope that a community-led system could prosper in the near future. Indeed, shared decision-making could make it fairer than any centralised system.

Forthcoming Updates

Waivlength is a grant recipient of the Algorand Foundation. As its developers and advisors continue to work hard over the months ahead on the platform build and on securing external investment, the platform has the potential to make a huge global impact as a competitor to current mainstream social media. Learn more at www.waivlength.io, where you can find a more detailed whitepaper and roadmap for the development of the platform, sign up for the launch of the dApp and find contact details for the team.

Special Mention

As always, the fantastic work of others must be acknowledged for the inspiration it has provided. Big thanks to Martin Kleppmann (Twitter: @martinkl), whose work has been a very informative source. A special word of thanks also to Nir Kabessa (Twitter: @nir_III), founder of Yup, whose ideas and written articles in the Web 3.0 space are another great source of inspiration.
