How Destabilization of Discourse on Social Media Is Performed

Subtle but maliciously organized attempts to guide public discourse online are now an open secret.

Whether facilitated by foreign nations or specialized firms, these alleged attempts to influence public discourse are worrying.

This article places the specific goals of such actors to the side. What’s important to know is that the specific goals of every actor in this scenario are backed by a broader intention: destabilizing public discourse, encouraging belief in falsehoods, and breeding hateful dialogue. Destabilization, distraction, and polarization often serve the granular goals such agents operate with.

This article aims to shed light on the general strategies involved in destabilizing public discourse on social media.

The goal here is for you to recognize possible ulterior motives behind the discourse you’re exposed to online; to lift you high enough to survey this landscape from a bird’s eye view. The actions you take based on what you discover from that vantage point are yours to consider and implement.

This article is not a technical analysis. It is intended to be a broad introduction to covert processes that seem to be taking place. Take everything you read here with a grain of salt and ensure you do your due diligence on researching any claims made below. It is difficult to comment with precision and accuracy on covert, ongoing operations intended to destabilize social discourse on a global scale. Everything you’re reading on this page is presented as opinion.


Why the Destabilization of Online Discourse Is a Priority to Malicious Actors


Many of the opinions you hold today have been influenced by your consumption of online content. Even if you limit your own time online, the roots of online information reach far into the minds of those within your social circle, your classrooms, and your workplace. Much of the social interaction you participate in, whether online or face to face, has been influenced – in some capacity – by information derived from the internet.

Those implementing an organized attempt at destabilizing discourse online have good reason to hypothesize that their attempts would send ripples across the social fabric of any one society, community, or culture as a whole. 

Broadly speaking, the goals backing organized attempts to sow discord into online communications are: 

  • To introduce distrust of authority in an attempt to alter the definition of acceptable behavior
  • To separate, and introduce conflict into, previously unified groups of opinion holders in an effort to win a battle of some capacity against that (now segmented) group
  • To distract, confuse, and provoke unnecessary emotions in populations who consume online content in an effort to stifle progress / change


The Guiding Principles of Any Operation: Subtlety, Decentralization, Immediacy, Consistency



Destabilization of discourse online relies on fanning already existing embers into a flame. It is easiest to destabilize online discourse when one’s subjects are already primed with opinionated, and differing, stances.

Online agents of destabilization focus on sensitive topics where a bell curve / normal distribution already governs the spectrum of existing public opinion. Via subtle and decentralized approaches (more on that below), the goal of malicious actors is to encourage the separation of unimodal distributions of opinion into bimodal ones.

Broadly speaking, this is attempted by setting off a chain reaction of differing opinions which escalate in the extremity of their responses to one another. As small chunks of moderate opinion holders exposed to extreme talking points adopt certain extreme views, other subsections of that same group are enticed to readjust their stances in response. Refusing to shift from a previously moderate opinion can thereby be labeled wrong once the goalposts of what constitutes an extreme opinion have been moved.
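This escalation can be illustrated with a toy opinion-dynamics simulation. To be clear, the model, its parameters, and its thresholds are purely illustrative assumptions for the sketch, not a description of any real operation. Agents who are close in opinion converge, while agents pushed past a tolerance threshold repel each other, splitting an initially unimodal population toward two poles:

```python
import random
import statistics

def simulate_polarization(n=200, interactions=40000, tolerance=0.2,
                          step_size=0.1, seed=42):
    """Toy sketch: opinions live on [-1, 1] and start in a moderate,
    unimodal cluster. When two random agents meet, they converge if
    they are within `tolerance` of each other, and repel otherwise.
    The repulsion slowly empties the moderate middle ground."""
    rng = random.Random(seed)
    opinions = [max(-1.0, min(1.0, rng.gauss(0.0, 0.25))) for _ in range(n)]
    for _ in range(interactions):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        diff = opinions[j] - opinions[i]
        if abs(diff) < tolerance:
            opinions[i] += step_size * diff   # close opinions attract
        else:
            opinions[i] -= step_size * diff   # distant opinions repel
        opinions[i] = max(-1.0, min(1.0, opinions[i]))
    return opinions

opinions = simulate_polarization()
extreme_share = sum(1 for o in opinions if abs(o) > 0.8) / len(opinions)
print(f"agents holding near-extreme opinions: {extreme_share:.0%}")
print(f"spread of opinions: {statistics.pstdev(opinions):.2f}")
```

Under these assumed dynamics, most agents end up clustered near the two extremes even though the population starts out moderate; that hollowing-out of the middle is the shift from a unimodal to a bimodal distribution described above.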



Any successful attempt at destabilizing online discourse will never appear to originate from any single source. Though sloppy attempts can sometimes be traced back to a single government, company, or institution, it is never their intention for that fact to be publicized. 

The two common tools for destabilizing social discourse are the use of bot accounts and the hiring of individuals to pose as members of the general public in an effort to propagate desired talking points. Accurately estimating how capable bot accounts are at effectively participating in real social discourse is beyond the scope of this article. An educated guess would point at bots being capable of amalgamating radical talking points by studying real, existing dialogue online.

An organized, and funded, operation would face low barriers to hiring a group of people to meticulously spend their days on social media. The creation of memes, the population of message boards / comment sections, and the generation of content online can easily, and effectively, be handled by a few dozen individuals working full time.



An operation which doesn’t vary the intensity of the dialogue it injects into the sphere of social discourse would be easy to identify. Varying that intensity also seems more effective from the perspective of influencing minds. The goal, as mentioned prior, is to separate the majority of moderate opinion holders into contending groups. Each individual in the moderate majority will be responsive to a unique level of extremism in the strategic communications being published.

It is therefore safe to assume that attempts to destabilize social media discourse encompass the deployment of rather vanilla / moderate stances on any one issue alongside extreme ones. A social media account is more believable in the extreme opinions it puts forth if its history includes some moderate, measured ones as well.

An important aspect of subtlety in operations aimed at kicking off destabilized discourse is thereby the constant variation of the intensity of the opinions being fed. A single extreme, polarizing opinion which baits a few individuals to react achieves the goal of the operation guiding it. Masking that one extreme, polarizing opinion with a history of moderate takes blends it in with the rich difference of opinions online.
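As a purely illustrative sketch of that mixing (the function name, post count, and the 5% rate are assumptions for the toy example, not observed figures), an account history can be modeled as a stream of mostly moderate takes with the occasional extreme one buried among them:

```python
import random

def build_post_history(n_posts=60, extreme_rate=0.05, seed=3):
    """Toy sketch: an account's posting history mixes mostly moderate
    takes with an occasional extreme one, so that the rare extreme
    post blends into an otherwise unremarkable record."""
    rng = random.Random(seed)
    return ["extreme" if rng.random() < extreme_rate else "moderate"
            for _ in range(n_posts)]

history = build_post_history()
print(f"{history.count('moderate')} moderate posts, "
      f"{history.count('extreme')} extreme posts")
```

An onlooker skimming such a history by volume alone would see an overwhelmingly moderate account, which is precisely the cover the occasional extreme post benefits from.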



The importance of being first to present an opinion for others to react to cannot be overstated. Online discourse almost universally follows a comment / reply structure. A comment’s primacy correlates with the number of replies it garners: the earlier a comment is published, the more replies come in (all other factors being equal). Being the first to comment on any piece of content online maximizes one’s chances of getting the most eyes on that opinion.

It is safe to assume that bots or individuals tasked with facilitating emotional / polarizing discourse online are found in greater numbers closer to the publishing date of any article or post. The goal of being first to flood the supposed public reception of a piece of news is backed by a desire to get the most optimal return on any investment being made.
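The advantage of arriving early can be sketched with another toy simulation (the 1/rank visibility weighting, viewer counts, and time window are illustrative assumptions, not platform data). Viewers arrive over time and reply to one already-visible comment, favoring higher-ranked, earlier comments:

```python
import random

def simulate_reply_counts(post_times, n_viewers=10000, horizon=100, seed=7):
    """Toy sketch of comment primacy: viewers arrive at random times and
    reply to one already-visible comment, weighting earlier (higher
    ranked) comments more heavily (weight 1/rank)."""
    rng = random.Random(seed)
    times = sorted(post_times)
    replies = {t: 0 for t in times}
    for _ in range(n_viewers):
        arrival = rng.uniform(0, horizon)
        visible = [t for t in times if t <= arrival]
        if not visible:
            continue  # viewer arrived before any comment existed
        weights = [1.0 / rank for rank in range(1, len(visible) + 1)]
        chosen = rng.choices(visible, weights=weights)[0]
        replies[chosen] += 1
    return replies

# Five comments posted at minutes 0, 10, 30, 60, and 90 of a 100-minute window
counts = simulate_reply_counts([0, 10, 30, 60, 90])
print(counts)
```

In this sketch the earliest comment collects the most replies on both counts: it enjoys an exclusive window before competitors appear, and it keeps the top-ranked slot afterward.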



The combination of the three factors above lends itself well to continued implementation over the long term. A disciplined attempt to remain subtle and decentralized in one’s approach allows attempts to steer public conversations toward the extreme to become habitual. A consistent attempt to sow discord into conversations online plants enough seeds that the chances of a fruitful chain reaction kicking off increase.

Simply having the resources for long term attempts to destabilize discourse often serves to outpace regular individuals who may be voices of reason, but are limited by the factors of time and effort. The direction that dialogue online takes is dictated by those who spend the most time communicating in online communities. The advantage of being there first, for the longest length of time, lends itself well to the achievement of social engineering goals.


The Mechanisms: Kicking Off the Cycle of Separation and Distrust



With the general operating principles covered, it is time to explore the common methodologies used in attempting to destabilize discourse online. This section of the article centers on the specific act of separating a moderate majority into contending / extreme subsets of opinionated individuals on a specific issue. The general methods guiding that separation are mentioned below, along with the ways in which this separation leads to distrust, an encouraged transition.

Distrust, as it is used in this context, describes agents of destabilization encouraging and relishing one side’s attempts to discredit those with the opposing viewpoint. The evocation of distrust can include attributing conspiratorial reasoning to the other side’s opinions, and encouraging the false labeling of members of the opposing side as intentional destabilizers of discourse.

The states of separation and distrust are favorable to those behind the deed of destabilization. These two impacts on online discourse are cyclical in their capacity to influence one another. As people become separated in their opinions, they begin distrusting one another. As the distrust grows to unhealthy and unproductive levels, the groups in question go on to grow more separated. 

Kicking off this cycle of separation and distrust is the principal goal for those acting to destabilize online discourse. 


How to Separate the Moderate Opinion Holders Into Two or More Extreme Groups

Simple but Prolific Dissemination of Extreme Stances

The flooding of public discourse with extreme viewpoints is perhaps the most common and fundamental method of casting a baited hook into the pool that moderate opinion holders populate. This intervention is often implemented in online communities with access to a large and diverse potential audience. Popular videos, discussion boards, articles, and social media accounts can be both the sources of such extreme stances and a breeding ground for agents of destabilization to flood replies and comment sections posing as regular civilians.

Two main goals govern the utilization of this method:

  1. Providing those who may be thinking, but not saying, these extreme opinions an excuse to join in on the publication of such opinions
  2. Enraging those who disagree with the opinions being put forth to the point of enticing them to voice opposite but equally extreme viewpoints themselves

The planting of “fake news” reports and articles is effective when the topic of that news is already familiar to the audience. “Fake news” relies on eliciting outrage from one end of the total spectrum of public opinion and agreement from those on the opposite end. In hopes of setting a series of back and forth reactions in motion, effective “fake news” intervention has the polarization of its audience’s views set as a first priority. Contrary to popular belief, “fake news” interventions are not designed to be believable as much as they are designed to be enraging to one opinion-holder and comforting to the other. 

Any one piece of “fake news” need only be believable enough to influence at least a small group of the total population who hold an extreme view on the subject at hand. Various methods are used to improve the perceived legitimacy of “fake news.” The convenient misrepresentation of facts, the cherry picking of evidence, the strawmanning of arguments, and the demonization of those who hold an opposite opinion are only a handful of the methods used to create effective “fake news” stories.

Other methods of distributing extreme stances on controversial topics include publishing reaction videos, blog posts, and social media posts, and commenting prolifically on content related to the controversial topic in question.

Weaponizing the Contrast of Extreme Commentary With Less Problematic, but Still Divisive, Views

As mentioned in the section on subtlety above, varying the levels of extremism used to kick off destabilized discourse is an important part of such campaigns. As part of that process, agents working to sow seeds of discord make a habit of weaponizing the more obnoxious aspects of even their own campaign.

The extreme opinions being put forth by individuals or bots will entice some and enrage others. For the educated onlooker, however, such extreme stances are easy to brush off as uneducated or even malicious at their roots. An effective way of appealing to the educated audience, thereby, is to seemingly combat the extreme opinions being put forth with less extreme but still divisive views.

For example, a comment on a news story about late term abortion designed to rile up emotional responses would look similar to: 

“These leftists would make killing newborns legal if they could.” 

The spectrum of likely replies to that divisive, baseless, and outrageous comment would range from outright vicious to tame. The stark, contrasting mark that comment makes on those perceiving it allows other, tamer comments of the same substance to pass through when normally they wouldn’t. Those who agree with the sentiment behind the rageful comment above may not want to support such a stark method of communicating one’s frustration about late term abortions.

A seemingly more nuanced but just as divisive response does well in enticing others to agree with the same sentiment without being as direct. The false condemnation of the blunt attempt masks the fact that such a response is backed by a similar sentiment. Such an attempt at giving those who agree with a divisive comment’s notion, but not its method of delivery, a way to chime in would look like:

“That’s an absurd claim, but their support for late term abortions is a tad worrying to say the least.”  

The initial, hateful comment thereby speaks to the anger some people may feel whilst reading the same article. A majority of those who agree with such a comment’s sentiment will remain silent as they interpret it. If, however, someone else expresses the same notion in a less problematic way, those primed by the initial comment are likelier to voice their agreement.

It would make sense for destabilization campaigns to thereby assign levels of “intensity” to the agents that are sent off to sow their seeds. Some operators are tasked with finding extreme opinions put forth early in the life cycle of a conversation and responding with more nuanced but still divisive views. The goal of such actors is to lower the barrier to entry for those who may be emotionally invested but not willing to publicly agree with such direct expressions of extreme opinion.

Open Ended Invitations / Questions to Set the Stage for Intense Back and Forths

A question can be a powerful tool in unveiling the underlying psychology of its respondents. Any controversial subject (whether post, article, or video) can have a question about it carefully crafted to entice extreme opinions. 

A question about this article which would entice opinionated, uneducated, and emotional responses would be akin to: 

“Anyone else feel this was written by someone from a foreign intelligence agency?” 

Questions are an important part of a destabilization campaign because they draw less attention than the simple dispersal of extreme opinions meant to set off an argumentative chain. Questions are less likely to be deemed malicious, as intent is a difficult thing to prove when analyzing an open ended question.

“What would you tell your kid if they came home telling you their teachers taught them that?” 

Questions such as the one above focus on reframing the information being presented to further entice extreme opinions to come out of the woodwork. The goal is to simply granularize the conversation about a particular controversial subject to the point of enticing members of the moderate majority to begin arguing with one another. 

The Consistent Publication of One Side of the Story in an Effort to Make It Seem Moderate

A concerted effort is placed on growing social media accounts into trusted sources of news distribution. Pages, profiles, and channels are grown to be perceived as legitimate aggregators of current news stories. 

The growing of social media profiles for the sole purpose of spreading selected (but otherwise legitimate) news stories seems to be a weaponized phenomenon. Profiles on the various social networks are able to influence public perception more and more as social media sites become the most popular destinations where people spend time online.

A majority of individuals spending time online first hear of any one particular news story while browsing social media. Their perception of that story is often already influenced before they learn more about the situation. From there, they’re linked to the reputable news source where that specific story was published. The factor in this process being weaponized by agents of destabilization is the act of carefully selecting news stories to publish on their social media profiles, and introducing their editorialized commentary (by way of post titles, comments, etc.) early in the process.

Most news publications attempt to tell both sides of a story as effectively as their biases allow. Individuals with followings on social media, however, feel no pressure to abide by any journalistic standards. They can elect to share news stories which fit conveniently with a desire to destabilize discourse and propel certain talking points. This phenomenon is similar to cherry-picking otherwise legitimate evidence whilst presenting an argument on a topic.

Such social media profiles / pages grow to be influential through extensive knowledge and use of online marketing strategies, along with an impressive consistency in their behavior.


How Separation Is Morphed Into Distrust of Certain Identities

As extreme and intentional discourse succeeds in guiding moderate opinion holders toward harsher stances on either side of an issue, the organized attempts behind this process seem to encourage divisiveness.

The introduction of distrust morphs the mere separation of opinions into a maintained sense of divisiveness between individuals. Divisiveness differs from separation in that it travels past any one particular issue. An established sense of divisiveness in one realm of conversation has a powerful capacity to introduce distrust into conversations about other matters with the same group of perceived enemies.

Distrust is bred from the simple acts of villainizing, questioning the experience of, and discrediting those who express relatively moderate and reasoned stances in response to extreme discourse. The sole purpose of breeding distrust is to drive the wedge of divisiveness between identities rather than opinions.

Dehumanizing dialogue is often adopted by agents tasked with sowing discord in online communities. In an effort to cloak their goals from the public’s perception, they direct false accusations of sowing discord at the innocent, which enables them to adopt dialogue that dehumanizes the individuals disagreeing with them.

Identity is thereby the main point of attack for those who seek to introduce a powerful sense of distrust between two or more groups which disagree. Since the purpose is to introduce divisiveness into aspects of society that online content can’t reach, identity is a consistent factor present in all elements of society, and thereby fits that purpose. The focus and attack on identity stemming from heated conversations about specific stances, opinions, and topics should be carefully attended to and studied.


How Distrust Is Maintained, Accentuated, and Breeds Further Separation

Simply put, the divisive nature of attempts to sow discord into conversations online encourages individuals who agree on certain extreme opinions to split off into smaller communities. These online communities act as gathering places for people who share similar opinions on controversial issues. Though these communities would have existed without organized attempts to destabilize online discourse, their popularity / legitimacy snowballs in the presence of successful destabilization operations.

An important thing to note is the comparable growth of the respective communities which separate out of larger ones due to the division that destabilized discourse breeds. The introduction of two extreme sides to any one issue breeds the respective gatherings of both contending populations of like minded individuals, and the two or more resulting groups often grow at similar rates. These specialized communities become breeding grounds for even more extreme opinions on the issue, opinions which go unchallenged.

Examples of breeding grounds of extreme opinions:

  • Smaller forums being created out of people’s negative experiences in larger ones and a desire for their extreme opinions to be heard
  • Videos made to suit one particular subset of opinion holders on a controversial topic
  • Social media pages centered on providing the extreme opinion holders of any one stance a protected space to organize and voice their displeasure


Disclaimer of Opinion: This article is presented only as opinion. It does not make any scientific, factual, or legal claims. Please critically analyze all claims made and independently decide on their validity.