People increasingly find themselves moderating online communities.
As online tools become more reliable, new opportunities to create unique communities arise. Online classrooms, work-from-home initiatives, niche discussion forums, and the comment sections under the content you publish all call for some form of behavior moderation.
The presence of bad actors in online communities is a statistical given. Bad actors can taint a professional online community, ruin the experience of existing members, and ultimately discourage new members from participating.
Intervention against behavior deemed “bad” online is a lightning rod for debate. Bad actors are quick to cite their free speech rights, and well-mannered critics of your community are quick to raise concerns over any bold censorship methods you employ.
The “slippery slope” arguments that members of your community raise in the face of drastic intervention methods are well intentioned and often right. They will make moderating an online community more difficult than you anticipated, because these topics demand careful consideration.
This article explores the effectiveness of modifying online behavior with a series of small inconveniences rather than drastic censorship methods.
First, the Difference Between Small Inconveniences and Big Walls
Small inconveniences in response to bad actors online are generally personalized, transparent, and serve as an annoyance more than a full stop.
Small inconveniences in regard to modifying behavior online commonly follow these general principles:
- Matching offline and online identities by way of stringent authentication methods
- Flagging troublesome accounts and communities so that others know about them and can engage with discretion
- Limiting bad actors in the freedom / quantity / reach of their communications
- Illuminating a path back to an unrestricted online experience
Big walls, on the other hand, are drastic censorship measures directed at accounts, posts, and communities deemed to be acting in bad faith or breaking general overarching rules. Permanent removals, bans, and community closures are at the core of “big wall” intervention methods.
Examples of small inconveniences:
- Additional account verification steps (ID, phone, email, etc.)
- Public facing labels / tags (NSFW, mature content, not suitable for advertisements, etc.)
- Limitations to instances of communication (number of comments per hour, new account restrictions)
- Transparent warnings / removals of rule-breaking communications, not the accounts that made them
- Real name / identity requirements
- Real profile picture requirements and guidelines
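As one illustration, the rate-limiting item above can be sketched as a small sliding-window limiter. This is a hypothetical example, not any platform’s real implementation; the class name, limits, and the stricter threshold for new accounts are all assumptions chosen for illustration:

```python
import time
from collections import deque

class CommentRateLimiter:
    """Sliding-window limiter: an inconvenience, not a wall.
    All thresholds are illustrative, not taken from any real platform."""

    def __init__(self, max_per_hour=10, new_account_max=3, new_account_days=7):
        self.max_per_hour = max_per_hour
        self.new_account_max = new_account_max
        self.new_account_age = new_account_days * 86400  # seconds
        self.history = {}  # account_id -> deque of comment timestamps

    def allow(self, account_id, account_created_at, now=None):
        now = time.time() if now is None else now
        window = self.history.setdefault(account_id, deque())
        # Drop timestamps older than one hour.
        while window and now - window[0] > 3600:
            window.popleft()
        # Stricter limit for accounts younger than the threshold.
        limit = (self.new_account_max
                 if now - account_created_at < self.new_account_age
                 else self.max_per_hour)
        if len(window) >= limit:
            return False  # politely refuse this comment; the account stays intact
        window.append(now)
        return True
```

Note that a refusal here blocks one comment for a while, not the account: the path back to an unrestricted experience is simply waiting out the window.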
Examples of “big wall” interventions:
- Account / IP / device / shadow bans
- Widespread censorship and removal of general topics / words across the board
- Drastic UI changes
- Forced downloads, newsletters, paywalls
- Zero tolerance, permanent interventions which punish the actor behind the act, not the act itself
Capitalizing on the “Doesn’t Affect Me” Reasoning
Managing onlookers’ responses to drastic methods of behavior modification is difficult. Permanent, broad, and damaging strokes of the authoritative brush you wield are likely to worry the innocent members of your community. Though their behavior may not warrant drastic modification today, they will grow wary of what you may deem inappropriate tomorrow.
Drastic and opaque moderation methods are severe enough to instill fear in even the most innocent online community participants. Your community management will assume a stringent identity, and the freedom with which those around you express their thoughts will likely suffer.
When you employ a “small inconvenience” approach to behavior modification, you’ll naturally be forced to be transparent in your reasoning for taking disciplinary action. Your reasons will need to be specific, and your action will be likelier to be perceived as fitting for the “crime” that took place.
Onlookers of your disciplinary acts against bad actors will be encouraged to develop a “that doesn’t affect me” interpretation of your interventions. They will gain a clearer and calmer understanding of the events that transpired in the community and will be less likely to fall into worrying speculation about your actions as an authority figure.
Small inconveniences thereby do not encourage others to come to the defense of those you deemed to be breaking community rules. By fostering a safe, “that doesn’t affect me” feeling among onlookers, you’ll be better able to segment and isolate any bad actors within your online communities.
Ownership of the Decision to Stop or Leave
Successfully eliciting a feeling of ownership in scenarios where bad actors plague your online community is an important goal. It is optimal to make individuals want to stop behaving in a way you deem inappropriate rather than make them feel as though they’ve been forced to.
Drastic intervention methods such as widespread dialogue suppression and account bans will leave your subjects feeling as though they’ve been forced to stop. That feeling breeds a desire to circumvent your bans and censorship strategies. They’ll be likelier to conjure conspiracy theories about the reasons behind your drastic interventions, tarnishing the general reputation of your online community in the process.
A bad actor’s drive toward maintaining a sense of autonomy should be respected as you try to dissuade them from incrementally raising the bar in how they act toward you and your community.
A sense of ownership over the decision to stop troubling behavior is more easily elicited by way of small inconveniences. Bad actors are likelier to consider the inconveniences you place in front of them annoying rather than challenging.
A rule-breaker’s decision to invest effort in fighting back is harder to make if your intervention methods remain trivial, insignificant annoyances. Their competitive drive to circumvent your interventions won’t be piqued if you employ a series of inconvenient hoops to jump through rather than erect an obnoxious, impassable wall.
Curbing Their Desire to Bite Down on the Mouth-guard and Swing
In what seems related to the Streisand effect, the perception of a powerful enemy encourages people to fight back harder than the perception of a weak one. A drastic intervention against rule-breakers in your online communities will encourage them to respond just as drastically.
Those who feel as though they’ve been cornered and excluded by way of your drastic behavior interventions will be susceptible to protesting, seeking revenge, and brigading your online community. The higher the wall you build against bad actors in your communities, the harder they’ll try to topple it.
Setting up smaller inconveniences in response to undesirable behavior renders any drastic reaction on their part an overreaction. You thereby force bad actors to choose between overreacting and responding proportionally to the small inconveniences you place in front of them.
Their overreactions can then be actioned appropriately should these individuals take that route. Proportional reactions to your smaller interventions, on the other hand, will draw little attention and pose less danger to your community.
Incremental Steps Are Difficult to Keep Ringing Alarms For
In the face of widespread unacceptable behavior in your online community, it may be necessary to employ an incremental series of steps that shapes the group’s behavior.
Your final vision of acceptable behavior in the community you manage can often be reverse-engineered into a series of steps leading to that vision. Deploying your intervention methods in small, inconvenient but bearable increments spaced out over time will discourage drastic protests.
Akin to slowly and incrementally dipping your body into a cold pool, modifying your online community’s group behavior incrementally will normalize changes which would be a shock if implemented all at once.
Rather than pushing your drastic visions of what your community should look and feel like onto the members which comprise it, implement those changes slowly and in small doses. In doing so, you’ll ensure that interest in the changes you employ remains low, thereby relying on the inattentive forgetfulness of your more vocal members to give you breathing room to work.
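The reverse-engineering idea above can be sketched as a small scheduling helper that turns a final target into spaced-out increments. The function name, step size, and interval are illustrative assumptions, not a prescription:

```python
from datetime import date, timedelta

def incremental_limit_schedule(start, current_limit, target_limit,
                               step=5, interval_days=14):
    """Turn a final vision (target_limit) into small, spaced-out steps.
    Example policy knob: a per-hour comment limit tightened gradually
    rather than all at once. All numbers are illustrative."""
    schedule = []
    limit = current_limit
    day = start
    while limit > target_limit:
        limit = max(target_limit, limit - step)   # one bearable increment
        day = day + timedelta(days=interval_days) # spaced out in time
        schedule.append((day, limit))
    return schedule
```

For instance, tightening a limit from 30 to 10 starting January 1, 2024 yields four biweekly steps (25, 20, 15, 10) instead of one abrupt change.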
Ease and Mobility of Implementation
It’s difficult to revisit what was intended to be a permanent act. As a human tasked with enforcing rules online, you should factor your own psychological pitfalls into the analysis of proper intervention. By being drastic and final in your disciplinary acts, you paint yourself into a corner of having to either defend your actions emphatically or admit you were wrong.
Drastic acts leave less space for nuance and reconsideration.
Remaining mobile in your disciplinary acts is thereby a requirement for ethical intervention. Your understanding of situations will change as more information is presented. The judgments you make at face value will morph, and you’ll be pressured to dive into the details and nuances of each particular case.
Implementing small inconveniences to nudge online behavior toward the positive will keep you mobile as a moderator. The inconveniences you set in front of individuals you judge to be acting in bad faith will be easy to either strengthen or withdraw in light of new evidence.
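That reversibility can be made concrete as a small escalation ladder: each account sits at a restriction level that a moderator can raise or lower, with a reason recorded each time. The level names and structure here are hypothetical, invented for illustration:

```python
from enum import IntEnum

class Restriction(IntEnum):
    NONE = 0
    WARNING = 1        # transparent warning attached to the offending post
    RATE_LIMITED = 2   # fewer comments per hour
    LABELED = 3        # posts carry a public-facing tag
    MANUAL_REVIEW = 4  # posts held until a moderator approves them

class ModerationLedger:
    """Reversible per-account restriction levels, with reasons attached.
    Escalate when behavior worsens; relax when new evidence clears it.
    Illustrative sketch, not any real platform's moderation API."""

    def __init__(self):
        self.levels = {}  # account_id -> (Restriction, reason)

    def escalate(self, account_id, reason):
        level, _ = self.levels.get(account_id, (Restriction.NONE, ""))
        new_level = Restriction(min(level + 1, Restriction.MANUAL_REVIEW))
        self.levels[account_id] = (new_level, reason)
        return new_level

    def relax(self, account_id, reason):
        level, _ = self.levels.get(account_id, (Restriction.NONE, ""))
        new_level = Restriction(max(level - 1, Restriction.NONE))
        self.levels[account_id] = (new_level, reason)
        return new_level
```

Because every step is one notch on the ladder rather than a ban, stepping back after a successful appeal is as cheap as stepping forward, and the recorded reason keeps the action connectable to a specific rule.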
You won’t feel pressured to over-explain your disciplinary acts, as small inconveniences are easier to connect to the specific rules that govern your online community. In leaving less space for subjectivity, an open dialogue can take place between you and those who plead innocence in the face of your disciplinary acts.