Conversational Game Theory Case Study: “Wiki Wars” on Wikipedia and MediaWiki

Rome Viharo
16 min read · Dec 18, 2021


Analyzing the Dynamics of Conflict in Wikipedia Governance: A Case Study

The MediaWiki Case Studies

Initially, this study concentrated on how majority and minority viewpoints contribute to consensus-building in contentious Wikipedia articles during “wiki wars.” However, it has since expanded to examine the dynamics of consensus building among individuals or groups, particularly focusing on three distinct behaviors exhibited in the process:

  1. Deceptive Practices: Individuals who engage in deception to sway online consensus.
  2. Intimidation Tactics: Individuals who seek to influence consensus through intimidation, bullying, or non-resolving communication.
  3. Control of Participation: Individuals who attempt to monopolize permissions and access within the consensus-building process.

These behaviors correspond to three "arcs" within the framework of Conversational Game Theory (CGT), which is designed for conflict resolution. The three arcs, termed "Heat," "Mirrors," and "Shadows," were informed by interactions observed on Wikipedia.

Conversational Game Theory uses an algorithmic narrative structure to impose order on the chaos of online discussion, generating a total of nine narrative events that unfold through decision-making and interaction. The presence of the three behaviors above predicts when consensus building will turn contentious, and the case study documents these events as they occurred in real time across social media platforms, focusing primarily on Wikipedia, with some exceptions.

The algorithm for CGT was developed specifically to address these behaviors and was completed in 2020.
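
For readers who want the structure made concrete, here is a minimal Python sketch. The behavior and arc names are taken directly from this post; the assumption that the nine narrative events arise from crossing the three behaviors with the three arcs (3 × 3 = 9) is for illustration only, as the full CGT algorithm is beyond the scope of this post.

```python
from enum import Enum
from itertools import product


class Behavior(Enum):
    """The three behaviors observed in contentious consensus building."""
    DECEPTION = "deceptive practices"
    INTIMIDATION = "intimidation tactics"
    CONTROL = "control of participation"


class Arc(Enum):
    """The three CGT arcs informed by interactions on Wikipedia."""
    HEAT = "heat"
    MIRRORS = "mirrors"
    SHADOWS = "shadows"


# Illustrative assumption: the nine narrative events modeled as the
# cross product of the three behaviors and the three arcs (3 x 3 = 9).
NARRATIVE_EVENTS = list(product(Behavior, Arc))


def is_contentious(observed: set[Behavior]) -> bool:
    """The presence of any of the three behaviors predicts that
    consensus building will turn contentious."""
    return bool(observed)


print(len(NARRATIVE_EVENTS))                 # 9
print(is_contentious({Behavior.DECEPTION}))  # True
```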

The case study explores majority and minority editing dynamics on Wikipedia, traversing three particularly controversial areas: Biographies of Living Persons (BLP), Fringe Subjects (WP: Fringe), and Paid/Agenda Editing.

It presents both first-person narratives and third-party evidence highlighting instances of “harassment,” as defined by Wikipedia’s community editing guidelines, referred to here as editor suppression.

Editor suppression represents a community strategy aimed at controlling editing permissions on Wikipedia.

This study investigates how agenda-driven groups (often likened to troll farms) can manipulate articles on Wikipedia and shape public perception on various MediaWiki platforms.

The case study focuses on two high-profile wiki wars involving the biographical entries of Rupert Sheldrake and Deepak Chopra, with which I engaged directly throughout the research process. I maintain complete transparency regarding my relationships with these individuals.

The ongoing conflict between two distinct editing communities on Wikipedia — the “woo” proponents and the “skeptics” — reveals genuine concerns about maintaining the integrity of the platform, particularly regarding alternative health claims.

While Wikipedia is justified in policing these issues, it also bears the responsibility to approach controversial subjects with the same standards applied to all topics, recognizing that even contentious subjects can present valid editorial concerns and reasonable boundaries.

In 2013, I began a project to edit a Wikipedia article involved in a “wiki war” as part of my independent research into Conversational Game Theory (CGT), a framework for online conflict resolution. I communicated my exploratory intent to fellow editors on Wikipedia, but some responded with suspicion, accusing me of aligning with pseudoscience and violating Wikipedia’s rules.

The response to my presence on this Wikipedia article was swift: within three days, I was doxxed by a small group of editors, in clear violation of multiple Wikipedia policies.

My name was circulated among this small group of collaborating editors and then published on RationalWiki, a site that mimics Wikipedia's branding but lacks its neutrality. This tactic aimed to undermine my credibility and push for my expulsion from the community, ultimately discrediting my research.

And this is the game on Wikipedia: in wiki wars, the reputation of the editors quickly comes into play. Reputation is the real currency on Wikipedia, and if you lack credibility there, you will likely be blocked from editing or banned altogether.

This situation became a game of cat and mouse, focusing on how quickly my reputation could be tarnished within Wikipedia’s open editing environment. I aimed to understand this phenomenon through the lens of CGT, which predicts the behavior of individuals who manipulate systems for their advantage. In this context, certain editors wielded significant power, seeking to justify my ban under the pretense of maintaining community standards.

I was accused of all sorts of things in this process: sock puppetry, trolling, disruption. What made these accusations ridiculous from my perspective was the amount of time I had spent preparing for this "wiki war" and learning Wikipedia's rules. I even hired a Wikipedia editor as a consultant to help me navigate the platform. Sticking to the rules was an important part of my research into wiki wars. Despite my intent to explore how CGT could enhance Wikipedia's governance, my reputation was systematically undermined, consistent with CGT's predictions regarding editing permissions.

Instead of being recognized as an independent researcher, I was labeled a promoter of pseudoscience, a notorious troll, and even a sociopath engaging in a dangerous practical joke. This personal toll was significant, yet I found solace in the insights gained from this experience. It highlighted how truth and deceit can be distorted within toxic online communities engaged in power struggles and vanity-driven pursuits.

What creates this sort of toxicity?

My investigation examined how Wikipedia’s software architecture and user interface create a competitive governance system with hidden rules that contradict its stated commitment to collaboration. MediaWiki gives certain users “policing powers” over other users, and as expected, these policing powers can be easily abused.

What began as a targeted attack on my reputation evolved into a predictable, bizarre narrative. My research effectively countered the tactics employed against me, demonstrating that the chaos surrounding my identity blinded my detractors to the possibility that my assertions might be valid. My sincere participation in Wikipedia led to increased attempts to undermine me, ultimately resulting in acts of censorship and deception, reminiscent of governance conflicts seen in broader political contexts.

Despite the trivial stakes of Wikipedia editing, the personal consequences were substantial. My years of anonymous online research were misrepresented, entwining my identity in a narrative that pitted my truth against that of my adversaries. The emerging defamatory narrative served as a measure of CGT’s effectiveness: the more effective CGT became, the more peculiar and propagandistic the narrative against me.

The ongoing conflict created chaos, with each attack providing opportunities for deeper exploration into online consensus building. I observed CGT’s influence in real time, even without the ability to edit my own representation. As I adopted a “tit for tat” strategy, each attack inspired a blog post on a WordPress site I created titled Wikipedia, We Have a Problem, leveraging feedback loops identified in CGT.
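
The "tit for tat" reference is to the classic strategy from iterated game theory, not to anything specific to CGT. As a minimal sketch, assuming the standard textbook definition, the strategy looks like this:

```python
def tit_for_tat(opponent_history: list[str]) -> str:
    """Classic tit for tat from iterated game theory: cooperate on the
    first round, then mirror whatever the opponent did last round.
    In this case study, each attack ("defect") was mirrored with a
    documented response (a blog post) rather than an escalation."""
    if not opponent_history:
        return "cooperate"
    return opponent_history[-1]  # mirror the opponent's last move


# Example: an opponent who defects twice, then cooperates.
moves = ["defect", "defect", "cooperate"]
for i in range(len(moves)):
    print(tit_for_tat(moves[:i]))  # cooperate, defect, defect
```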

Throughout this ordeal, I maintained transparency with the Wikipedia editors targeting me, informing them that their tactics would be part of my study on conflict resolution. However, their distorted narrative prevented them from taking my assertions seriously. Despite my openness, they continued their harassment, attempting to drive me away from Wikipedia through impersonations and psychological attacks, which also appeared on RationalWiki after my doxxing.

The hostility I faced as an outsider within the Wikipedia community was profound, an experience I had not anticipated. I initially held idealistic views of Wikipedia, believing I could foster consensus and navigate its governance mechanisms successfully. Although I achieved consensus on two occasions, the reality of Wikipedia’s functioning diverged significantly from my expectations.

The campaign against me sought to discredit my character using vicious and inhumane methods, culminating in actions so extreme that I felt compelled to report the situation to the FBI. This serious step highlighted the situation’s severity, impacting my professional reputation, built on integrity and problem-solving skills. This seemingly trivial wiki war revealed a troubling pattern of targeting, stalking, and impersonation as intimidation tactics among Wikipedia editors.

Ultimately, this experience provided invaluable insights into online discourse and significantly informed the development of CGT, raising critical questions about how conversations evolve and how consensus is formed in the presence of bad faith actors who intentionally disrupt community perceptions.

Exploring Wiki Wars: The Dynamics of Reputation and Editing on Wikipedia

What occurs when individuals hostile to you are the ones editing your Wikipedia article, while neutral or supportive editors are intimidated away? This scenario raises significant concerns for anyone with a notable reputation on Wikipedia and creates fertile ground for a “wiki war.”

This case study examines the phenomenon of “Wiki Wars,” focusing on how majority and minority perspectives collaborate to write articles on Wikipedia. It delves into the complexities of Wikipedia’s governance system and aims to understand the decision-making mechanisms in challenging editorial contexts. Ultimately, who determines what gets published during a wiki war?

To explore these questions, I adopted a “gonzo” research approach, immersing myself in a wiki war to document my observations and learn from any mistakes. My focus was on three critical areas that challenge Wikipedia’s governance model: Biographies of Living Persons (BLP), Fringe Topics, and Paid Editing. Understanding how converging perspectives reach consensus and manage conflict is essential for developing Conversational Game Theory (CGT), a computational approach to online conflict resolution and collaborative editing.

For years, I avoided discussions on contentious topics, such as politics or pharmaceuticals, fearing that my inexperience might expose me to the chaos of wiki wars. However, in 2013, I initiated a case study centered on a seemingly benign wiki war that attracted mainstream media attention, including coverage from the BBC. This conflict revolved around Rupert Sheldrake, a Cambridge biologist whose controversial views sparked backlash from the skeptic community and organizations like the James Randi Foundation, leading to a significant editorial struggle over his biography.

The situation quickly escalated, revealing a divide between “skeptic” editors, who sought to protect Wikipedia from what they viewed as pseudoscience, and various contributors advocating for alternative perspectives, including neutrality — a principle central to Wikipedia’s policy. I mistakenly believed this community would offer a less contentious environment than typical political wiki wars, an error I recognized as my research progressed.

As I investigated further, I discovered a network of Wikipedia editors — comprising university professors, podcasters, and influencers within the activist skeptic community — who exploited the platform to promote their views while undermining the cultural figures they criticized. This mirrors a broader trend in influencer culture, where individuals gain attention and status by critiquing subjects, often leading to financial incentives. Many of these editors sought recognition for their efforts, establishing a hierarchy of influence.

I found that they manipulated Wikipedia as a tool to impose divisive perspectives, frequently for trivial purposes, such as increasing views for their online content. Consequently, wiki wars can become painfully petty.

These editors operated within a complex web of communication, turning Wikipedia into a battleground for ideological conflict. The stakes were perceived as high enough that many felt compelled to manipulate Wikipedia’s rules through a shadow governance system, raising important questions about who controls editing permissions and how the platform is policed by volunteers who wield authority yet remain vulnerable.

My research revealed that these Wikipedia editors employed tactics similar to those of disinformation troll farms commonly seen in political culture. Over five hundred fake editing accounts were identified, utilized to influence consensus outcomes or harass editors with differing viewpoints.

This is particularly concerning given that the skeptic community is one of the largest social networks online, actively shaping narratives and building consensus. The emergence of GamerGate further complicated these dynamics, leading to political divisions within the skeptic community. This polarization highlights the need for frameworks like CGT to foster constructive dialogue amid the rising toxicity of online interactions.

This eight-year case study aims to clarify these complexities and contribute to effective strategies for conflict resolution and consensus building in digital environments.

How Do Different Perspectives Collaborate on Wikipedia?

This was the central question driving my inquiry. In my study, I focused on some critical facts about two individuals. Both of their Wikipedia articles shared a significant issue: essential information was missing from the opening of each article, the first thing anyone sees when researching these individuals.

Deepak Chopra is a medical doctor, which is an important aspect of his biography. Similarly, Rupert Sheldrake is a biologist with a degree from Cambridge, a detail that is crucial to understanding his background and expertise. However, instead of presenting these accurate descriptions, the articles labeled Deepak as a “New Age Guru” and Rupert as a “parapsychologist.”

This small collective of Wikipedia editors made it clear that they would not acknowledge these individuals’ credentials, justifying the exclusion of these factual details. This led them to dismiss editors who sought to include the correct information, labeling them as “promoters of pseudoscience.”

This situation prompted me to apply a framework called Conversational Game Theory (CGT) to better understand how consensus is built in this context. My goal was to explore how differing opinions can work together to arrive at a more accurate and fair representation of these individuals on Wikipedia.

Rupert Sheldrake

Biography of a Living Person; Rupert Sheldrake

Editing accounts: Tumbleman (talk | contribs), Philosophyfellow (contribs)

Deepak Chopra

Deepak Chopra reached out to me after hearing about the case study on Rupert’s biography, as the two individuals are acquainted. Our relationship was primarily professional for a few months, during which I served as his media representative on Wikipedia. Since the completion of the case study, we have not maintained further contact.

When we first met, Deepak expressed genuine interest in both Aiki Wiki and my research. He generously offered a donation of $5,000 to support the project, which I accepted and put to good use. Following this, he inquired about my professional services, seeking assistance with his Wikipedia article. Regardless of personal opinions about Chopra, his biographical article was indeed a factual mess.

I proposed that he hire me as his direct media representative for the article, presenting an opportunity to explore the dynamics of “paid editing.” This raised an intriguing question: what happens when a public figure faces legitimate issues with their biographical article? Wikipedia allows individuals to hire representatives, but those representatives cannot make edits; they can only negotiate with the community. Operating from this constrained position, I employed the methodology of Aiki Wiki to facilitate changes to Deepak’s article, leveraging the influence of the community without making any direct edits myself.

This effort for Conversational Game Theory (CGT) consensus took approximately 30 days and succeeded in achieving a consensus in a challenging and toxic environment, relying solely on the decision-making processes of human editors with strongly differing perspectives.

However, this victory was short-lived. I was doxxed on Wikipedia a second time, my identity was disclosed, and I was banned once again. The consensus we had achieved unraveled as my presence was successfully removed from the consensus process. This ability to censor or suppress a perspective within a consensus is crucial to the ongoing development of CGT.

Moreover, both Rupert Sheldrake and Deepak Chopra have legitimate concerns regarding their articles, independent of any critiques of their work. In both instances, Wikipedia editors intentionally crafted malicious, negative framings and representations of these living individuals. These editors exploited Wikipedia's system to suppress its own processes, denying these two public figures the recourse that the Wikimedia Foundation typically grants.

Biography of a Living Person; Deepak Chopra

Editing account: SAS81 (talk | contribs)

RationalWiki — the troll farm MediaWiki.

RationalWiki can easily become a platform to harass individuals on the internet, and a few of the individuals who use the platform participated in this case study after full disclosure.

Meet a MediaWiki troll farm; RationalWiki

The activities of the troll farm are extensively documented on MediaWiki platforms, which facilitates the validation of my case study.

The disinformation campaign directed against me has been one of the more striking developments in my research, and I take pride in exposing a narrative that clearly misrepresented me, alongside the actions of a small group of dedicated Wikipedia editors who were visibly displeased with the publication of this study.

Through this case study, and with assistance from others online, I was able to identify well-known internet trolls, twin brothers from the UK, who managed a network of over five hundred Wikipedia editing accounts. These accounts were employed in various influence and harassment campaigns, even affecting media narratives. To this day, these individuals continue to engage in online harassment and have demonstrated a notable proficiency in impersonation, showcasing their skill in creating misleading identities.

Meet the Wikipedia sockpuppet army. (Manul/Vzaak, Atlantid, Goblin Face, Dan Skeptic, etc)

The troll farm of experienced Wikipedia and MediaWiki editors often finds a platform on RationalWiki, which has contributed to my case study by disseminating misinformation about the study and targeting me as the author. Being at the center of this agitated troll farm provided a unique opportunity to understand how “actual” subjects or events can be distorted by disinformation and misinformation.

RationalWiki emerged in response to Conservapedia, a MediaWiki alternative founded by conservative editors to counter what they perceived as bias on Wikipedia. RationalWiki, in turn, began as a platform to critique and troll these conservative counterparts. With branding designed to resemble Wikipedia, RationalWiki exemplifies how ideological conflicts originating on Wikipedia can lead to alternative platforms that perpetuate narratives which lost out on Wikipedia or spread misinformation about opposing viewpoints.

Due to the targeting and misinformation directed at me and the case study, the work now serves a dual purpose. Wikipedia, We Have a Problem not only recounts the events of my research but also aims to negotiate and build consensus in light of the various troll farms disseminating false information about me. This has posed a significant challenge, as it directly impacts my professional life.

In confronting actual “skeptic communities” — who self-identify as skeptics on their social media profiles — I faced claims that my intention in publishing Wikipedia, We Have a Problem was to “promote pseudoscience.” This accusation is utterly unfounded, as I have been transparent about my intentions and research from the outset.

Fallout

The case study also uncovered activities of publicly known Wikipedia editors, whose editing accounts revealed their identities. My first experience in a wiki war illuminated how Wikipedia is used by academics and influencers to engage in personal disputes and “tit for tat” actions against colleagues they dislike.

Notably, well-known figures like Professor Jerry Coyne from the University of Chicago, along with other skeptic bloggers such as Tim Farley, were found to utilize Wikipedia as a platform for their specific brand of activism. Their targets primarily include Rupert Sheldrake and Deepak Chopra, demonstrating how personal grudges can influence editorial content on the platform.

Meet the leaders of the troll farms

I find that I don’t have significant ideological differences with many perspectives within RationalWiki or among skeptics, particularly regarding pro-science positions or the publication of progressive policies. However, I strongly object to the harassment and the divisive, often abusive tactics used to promote science or progressive politics. This specific acerbic style of confrontation — employing snark in what is supposed to be “rational” discourse — targets perceived ideological foes on the far right.

The Wikipedia editors involved are among the earliest influencers to realize that a caustic online presence yields greater engagement. Being acerbic, divisive, and confrontational garnered more attention than the straightforward summaries typically found in scientific discourse. Thus, identifying as a “Skeptic” became a strategy for online influence, a means to attract readers to their content.

I critique any scientist who chooses this path, as I believe it is detrimental to the dissemination of science. If this style of communicating science is effective, why do more individuals in 2023 believe in a Flat Earth than ever before? This reflects how poorly “snarky skepticism” serves the public good.

More people today distrust science than ever before, and I don’t think these particular skeptic influencers recognize the damage their approach causes. At the end of the day, a troll farm remains a troll farm.

I did not anticipate becoming a target of harassment when I began this study, nor did I foresee how deeply it would extend into the darker corners of “wiki” communities. I never imagined that this experience would provide me with unique insights and a deeper understanding of the psychological dynamics that lurk beneath the surface of the internet. Over the past four years, these experiences have allowed me to refine the deep computational narrative structures of Conversational Game Theory (CGT), for which I am profoundly grateful.

Case study commentary

Part Three: Comparative study; other “troll farms”
