Safe by Design

Note: This post will be updated based on feedback. See end for changes made.

Open digital spaces have been a real boon for people who want to find and connect with others who share their values and life experience. Families, workmates, school friends, clubs and community groups have all benefited from the ability to connect and share. These communities are built on social networks such as Facebook and Twitter – platforms that ultimately serve advertisers and shareholders. But the models and methods used to maximise value for those shareholders have been causing harm to individuals and society.

Some will argue that the right to free speech is absolute: move along, there is nothing to see here. Others will argue that any limits on that speech are censorship, and that the free market (for business and ideas) must prevail. Some will say that until we have resolved the complex issues surrounding speech we should not do anything – we must have a complete solution, and anything else will just cause more harm.

But in looking at better models for digital spaces we don’t actually need to resolve these issues at all, and I firmly believe that in some cases attempts to do so have slowed progress on designing something else.

This post contains some ideas, framed as ‘safe by design’ principles. These could be applied when building a digital space such as a social network, to ensure that controls are in place to give everyone a good user experience (which includes being safe). I don’t lay claim to any special insights, and this isn’t a definitive list – it is more a starting point for a discussion.

Principle One: The burden of dealing with harm should not fall on the targets of that harm

This foundational principle is one that should govern the application of all other principles and design work. The question to ask when building anything that attempts to mitigate, reduce or remove harm is, where does the burden of doing this work fall?

This principle does raise a much harder question: in an environment where anyone can join and post without their content being reviewed first (either by software or a human), how do we ensure that this burden does not fall on the people being harmed?

Principle Two: Individuals define what safety/harm looks like

In an open space where anyone can comment on anything or say what they want, friction arises when people do not agree with what is said, whatever the reason. Some of this content is offensive to almost everyone, some is targeted at certain groups, and some is just thoughtless or pointless. Regardless of any intent, or lack thereof, digital spaces can quickly become places of harm for some people.

Harm must always be defined and understood from the perspective of the person experiencing that harm. There isn’t a truly objective measure of harm that can be magically applied to all content, because experience and context are different for everyone. When people object to content, I have seen responses such as, “I did not find that offensive, so why should anyone else”, or, “That is just an opinion/idea, I don’t see why you are so upset about it”, or even worse, claiming that someone is a “snowflake” or “thin-skinned”. This is unhelpful and demonstrates a lack of empathy.

Another challenge is the abuse directed at some people. The design of digital spaces tends to amplify this content through engagement – people calling the abuser out, trying to shout them down, others piling on, and so on. We know from psychology that belief perseverance and confirmation bias conspire to make it nearly impossible to change people’s minds. Arguing is pointless, but doing so (especially if there is an algorithm) increases the visibility of that content. It can also cause further harm to communities when allies pile on trying to help.

Let people decide for themselves what is safe or harmful, and then give them the tools to signal the latter to the platform, but with reference to principle one.

Principle Three: Individuals can control what they see

Many current platforms have the facility to block people, and to filter certain words or content. Why more people do not use these, particularly blocking, is beyond me, but these features are not enough. If a platform does not have at least these features, it is not safe by design.

One option in this area could be to allow users to tag others as being part of a community, with this tagging being moderated by that community. You can be added to a community tag, and existing members can remove that tag (perhaps three votes to remove or review) if you misbehave. Some community tags will only be seen if you are in the community already (private tags), for safety reasons.
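To make the idea more concrete, here is a minimal sketch of how community tags with peer-moderated membership might be modelled. Everything here is an illustrative assumption rather than a specification – the class and field names are invented, and the three-vote threshold simply mirrors the “perhaps three votes” suggestion above.

```python
from dataclasses import dataclass, field

REMOVAL_VOTES_REQUIRED = 3  # "perhaps three votes to remove or review" (illustrative)


@dataclass
class CommunityTag:
    """A community tag whose membership is moderated by the community itself."""
    name: str
    private: bool = False                       # private tags are only visible to members
    members: set[str] = field(default_factory=set)
    removal_votes: dict[str, set[str]] = field(default_factory=dict)  # target -> voters

    def add_member(self, added_by: str, new_member: str) -> None:
        # Any existing member can add someone to the tag.
        if added_by in self.members:
            self.members.add(new_member)

    def vote_to_remove(self, voter: str, target: str) -> bool:
        # Existing members vote; once the threshold is reached the tag is removed.
        if voter not in self.members or target not in self.members:
            return False
        votes = self.removal_votes.setdefault(target, set())
        votes.add(voter)
        if len(votes) >= REMOVAL_VOTES_REQUIRED:
            self.members.discard(target)
            del self.removal_votes[target]
            return True
        return False

    def visible_to(self, user: str) -> bool:
        # Private tags are only seen by people already in the community.
        return (not self.private) or user in self.members
```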

The community tag would allow people to filter their timeline, with the option to exclude non-community replies. This ensures that poor behaviour is not seen by the community it is directed at, although others could still flag or block the content. People could also follow public tags, or private tags with permission, without having to follow each of the members individually.
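A sketch of the filtering side, reusing the hypothetical CommunityTag class from the previous snippet. The Post shape and the filtering rule are my own assumptions about one way “exclude non-community replies” could work.

```python
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    reply_to_tag: str | None = None  # community tag the post is replying into, if any


def filter_timeline(posts: list[Post], viewer: str,
                    tags: dict[str, "CommunityTag"],
                    exclude_non_community_replies: bool = True) -> list[Post]:
    """Return only the posts the viewer has chosen to see.

    Replies aimed at a community the viewer belongs to are dropped unless the
    author is also a member, so poor behaviour is not shown to the community it
    targets (others outside the community can still flag or block it).
    """
    visible = []
    for post in posts:
        tag = tags.get(post.reply_to_tag) if post.reply_to_tag else None
        if (exclude_non_community_replies and tag
                and viewer in tag.members
                and post.author not in tag.members):
            continue
        visible.append(post)
    return visible
```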

Blocking should be easy by default, with accounts reaching a certain ‘blocked’ threshold over a specific time period being temporarily locked from posting. These accounts could then be reviewed by a human, and suspended as needed.
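Here is a minimal sketch of that threshold mechanism. The window length and block count are placeholder values a real platform would need to tune; the class and method names are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative values only; the real threshold and window would need testing.
BLOCK_THRESHOLD = 20
WINDOW = timedelta(days=7)


class BlockMonitor:
    """Temporarily locks accounts that many people block within a short window."""

    def __init__(self) -> None:
        self.blocks_received: dict[str, list[datetime]] = defaultdict(list)
        self.locked_for_review: set[str] = set()

    def record_block(self, blocked_user: str, now: datetime | None = None) -> None:
        now = now or datetime.utcnow()
        events = self.blocks_received[blocked_user]
        events.append(now)
        # Keep only blocks inside the rolling window.
        self.blocks_received[blocked_user] = [t for t in events if now - t <= WINDOW]
        if len(self.blocks_received[blocked_user]) >= BLOCK_THRESHOLD:
            # Lock posting and queue the account for human review / possible suspension.
            self.locked_for_review.add(blocked_user)
```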

The ability to have a mix of untagged, open tagged, and private tagged people in your feed allows for better curation and control, and a better user experience.

Blocking is one signal that a user’s behaviour requires review. Another might be to add a ‘requires moderation’ flag for posts. If enough people mark something, it gets hidden for checking by staff.
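The same threshold idea can apply to individual posts. This sketch hides a post once enough distinct people flag it, pending a staff decision; the threshold value is again a made-up placeholder.

```python
MODERATION_FLAG_THRESHOLD = 5  # placeholder for "if enough people mark something"


class ModerationQueue:
    """Hides posts that enough distinct users have flagged, pending staff review."""

    def __init__(self) -> None:
        self.flags: dict[str, set[str]] = {}     # post_id -> users who flagged it
        self.hidden_pending_review: set[str] = set()

    def flag(self, post_id: str, flagged_by: str) -> None:
        voters = self.flags.setdefault(post_id, set())
        voters.add(flagged_by)
        if len(voters) >= MODERATION_FLAG_THRESHOLD:
            self.hidden_pending_review.add(post_id)

    def resolve(self, post_id: str) -> None:
        # Staff have reviewed the post; clear it from the queue either way.
        self.hidden_pending_review.discard(post_id)
        self.flags.pop(post_id, None)
```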

I do see some issues with using ‘the wisdom of the crowd’ to signal problems with content. The system could be gamed, used for retribution or harassment, and so on. As noted above, this is an attempt to start a discussion; careful design, testing of features with real people, and responding to real-time feedback are going to be vital to finding workable solutions.

In the end, the community must decide what it finds acceptable. If it were quick to block someone, and for the platform to remove that person’s content (posts and replies) from visibility within your community, would that be enough?

Principle Four: Individuals’ activity is not tracked

All of the current social networks use surveillance capitalism as their primary business model. This requires that every user action is tracked, with the data being used to manipulate people so that they stay on the platform longer. The ultimate aim? Show you more advertisements, and (based on what they know about you) hopefully ones that you will click on, or watch. You are the product, being sold to the highest (advertising) bidder.

The downside of this approach is that content that causes strong emotions is amplified. In most cases, it is negative emotions – anger, disgust, outrage – in the driving seat.

Let’s have a platform where content is promoted based on its actual merits, and not weaponised against users in the name of profit.

Principle Five: Algorithmic Transparency

If there is an algorithm in use to prioritise content in users’ timelines, it should be publicly available, and subject to review, criticism and modification. The impact the algorithm has on users should also be publicly available for research and review.

Users should be given a choice of chronological or algorithmic, and both should always include any blocks or filters they have applied.
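A small sketch of that principle: blocks and filters are applied first, regardless of whether the user chooses a chronological or an algorithmic ordering. The data shapes and the placeholder ranking function are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable


@dataclass
class FeedPreferences:
    blocked_authors: set[str] = field(default_factory=set)
    muted_words: set[str] = field(default_factory=set)
    chronological: bool = True   # the user chooses the ordering, not the platform


@dataclass
class FeedPost:
    author: str
    text: str
    created_at: datetime


def build_feed(posts: list[FeedPost], prefs: FeedPreferences,
               rank: Callable[[FeedPost], float] = lambda p: 0.0) -> list[FeedPost]:
    # Blocks and filters are always applied, whichever ordering is chosen.
    visible = [
        p for p in posts
        if p.author not in prefs.blocked_authors
        and not any(w in p.text.lower() for w in prefs.muted_words)
    ]
    if prefs.chronological:
        return sorted(visible, key=lambda p: p.created_at, reverse=True)
    # A public, reviewable ranking function would slot in here (placeholder).
    return sorted(visible, key=rank, reverse=True)
```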

Some other things to consider

In designing a digital space for use by the public, we should consider that regardless of the right to free speech, individuals and companies do not have a right to force or require others to read or propagate their speech. A publisher does not have to accept your book proposal. A paper does not have to publish your letter to the editor. You cannot be forced to stay and listen to a talk you do not like or agree with. Why should online be any different? (I won’t argue this point with you, I just ask that you think about it.)

We should also consider whether digital spaces should be invitation only. Some Facebook groups and Slack channels are examples of where this already happens informally.

Invite-only spaces act as a filter because (in theory) people are going to invite people they know and want on the platform. You are personally vouching for the people you invite. The argument against this is that there is some exclusivity, but I think psychologically this is an advantage because it incentivises those who are on the platform to be more careful about who they invite. It’s not just a free-for-all platform.

An alternative is a probation period for all new people joining the platform.
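One way these two ideas could fit together is sketched below: existing members vouch for the people they invite, and new accounts could face tighter limits until a probation period has passed. The probation length, the recording of who vouched for whom, and all names here are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

PROBATION_PERIOD = timedelta(days=30)  # illustrative length only


@dataclass
class Member:
    handle: str
    invited_by: str | None     # the existing member who vouched for them
    joined_at: datetime


@dataclass
class Membership:
    members: dict[str, Member] = field(default_factory=dict)

    def invite(self, inviter: str, new_handle: str,
               now: datetime | None = None) -> bool:
        # Only existing members can invite; they are personally vouching.
        if inviter not in self.members:
            return False
        now = now or datetime.utcnow()
        self.members[new_handle] = Member(new_handle, inviter, now)
        return True

    def on_probation(self, handle: str, now: datetime | None = None) -> bool:
        # New accounts could face tighter limits (e.g. lower flag thresholds)
        # until the probation period has passed.
        member = self.members.get(handle)
        if member is None:
            return True
        now = now or datetime.utcnow()
        return now - member.joined_at < PROBATION_PERIOD
```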

The business model is another point to carefully consider. The model for most startups is to attain product-market fit (i.e. the product serves a need), and then to grow as fast as possible in the hope of reaching a scale where income exceeds expenditure. There is often a hope that even if the business is not profitable, it will be acquired because of its growth, the size of its user base, or the potential to merge with some other business to drive profit or growth. Much of this is funded by venture capitalists or investors, hoping for a big payday on at least one of the companies they fund. This is the way. But most fail.

This focus on growth to achieve scale drives many behaviours that do result in useful features, but that ultimately cause a platform’s downfall. A lot of things just do not, or cannot, scale. For example, social networks have had to retrofit safety valves such as features to report abuse, and human moderation.

All of the social networks and major search engines are advertising platforms. They allow advertisers to target their message to people who might be more likely to be interested than, say, the readers of an ad in the local paper. If you have seen the film The Matrix, you are just a battery, powering someone else’s profit machine.

Would people pay $1 a month? I have no idea if you can fund a platform at scale this way, but perhaps by flipping the model and making the user the customer, it could work? For example, I have noticed quite a few Mastodon instances publish their costs, and people chip in. This shows that if the motivation and values align, people are willing to pay.
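Purely as a back-of-envelope illustration of the question, not a business case: every number below is a made-up assumption, not real data about any platform.

```python
# All figures are hypothetical assumptions for illustration only.
monthly_fee = 1.00            # "Would people pay $1 a month?"
paying_share = 0.05           # assumed fraction of users willing to pay
cost_per_user_month = 0.03    # assumed hosting/moderation cost per active user

users = 1_000_000
income = users * paying_share * monthly_fee
costs = users * cost_per_user_month

print(f"income ${income:,.0f}/month vs costs ${costs:,.0f}/month")
# With these made-up numbers: $50,000 income vs $30,000 costs per month.
```

The point of the arithmetic is only that the answer hinges on two unknowns – what it really costs to run a safe platform per user, and what share of users would pay – not that these particular numbers are achievable.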

Businesses built for growth without a viable business model are inherently unstable, and ultimately not sustainable. What would a traditional bootstrapped (or lightly funded) social network that was built on principles like the ones above look like? Would anyone be willing to try, trading the chance of a quick return and a big payout for a sustainable business that benefits society and its members?

I hope so.

Changelog

Add a new first principle.
