Rome Viharo published a case study on Wikipedia troll farms, and became the focus of a disinformation campaign in response.
For the uninitiated, Rome Viharo was/is Wikipedia’s Abused Stepson, the guy who tried to balance the BLP (“biography of a living person,” to spell out the Wikipediaese) of Rupert Sheldrake, which was being loaded to near-libel by the Guerrilla Skeptics. He was kicked out of en.Wikipedia, then “earned” a very stupid article on RationalWiki, in response to which Viharo ran a blog trying to explain the mess, titled Wikipedia, We Have a Problem (2013–2019).––Strelnikov, blogger, Wikipedia Sucks and So Do Its Critics.
Wikipedia, like all other MediaWiki platforms, shares a singular notion about the brand “wiki”: a “wiki” is a type of encyclopedia that “anyone can edit.”
While the platform architecture allows anyone to edit or participate, the community of editors follows another rule: while anyone can edit, not everyone should.
And if some can edit and not others, which others?
How do you decide who can and cannot edit an article?
When there is a dispute over this answer, a “wiki war” can emerge. Once a wiki war emerges, it becomes a battle of power, influence, perception and permission. Blood will be drawn, many GamerGates will happen.
Over the years, I, Rome Viharo, have found myself entangled in a web of bizarre yet creative deception while exploring the editor permission problem on Wikipedia and the phenomenon of “wiki wars” on controversial articles back in 2013. At the time I considered myself a Wikipedia idealist, fascinated by Wikipedia’s decentralized and open source structure, and even praising it for its potential for governance, something I highlighted in a 2011 TEDx talk that I gave about social media, governance, and storytelling.
I volunteered to help edit a contentious Wikipedia article, being transparent about my research in conflict resolution, consensus building, and collective editing. I declared myself “neutral” on the topic; I was simply agnostic to the situation, and I could not have made this clearer to the Wikipedia editing community.
I posted this transparent declaration on Wikipedia in 2013, and it is still preserved by the MediaWiki system.
In response to my quite transparent declaration, I was doxxed and my privacy invaded, especially around years of research I had been doing online anonymously. Within a few weeks, my anonymous Wikipedia editing account was published under my actual identity on a MediaWiki site known as RationalWiki, which treated me as an internet kook researcher and promoter of pseudoscience who was caught “trolling” Wikipedia, and claimed I was a conspiracy theorist about “skeptics editing Wikipedia”.
I had encountered a variation of a troll farm, deeply embedded in Wikipedia’s architecture and governance. That experience has vastly informed Aiki Wiki, a sophisticated computational algorithm for online consensus building, the exact project I had informed them I was researching back in 2013.
Their concern was that the consensus I was building would achieve a majority consensus on the article: permission to change it. What they were running was a real strategy, quite effective on Wikipedia, for gaming its system to control editing permissions.
To solve the editor permission problem, Wikipedia and MediaWiki hand “policing” power, the power to determine who can and cannot edit, over to a hierarchy of editors and admins. This opens up something almost identical to an online digital governance system, with various levels of dispute resolution, all running on MediaWiki, which actually delivers something close to a blockchain: a permanently stored archive of everything that gets published, which cannot be removed or altered.
I wanted to test every end of this process for my own research into online consensus building. Determining how editing permissions are distributed in an open system, without forcing censorship, is not a trivial problem. Aiki Wiki too must have a solution to it, and the outcome of this research would determine whether Aiki Wiki could solve it at all.
So how far would the “heat” of our dispute take us across MediaWiki platforms? How far would this collective go, from applying various tactics of verbal bullying and personal attacks to deceptions about the narrative of who Rome Viharo is on the internet?
I was determined to find out and find out I did.
What followed this research into consensus building on Wikipedia was one of the strangest, most disturbing, and yet most rewarding experiences of my entire life.
This troll farm I encountered while editing Wikipedia was motivated by neither money nor corporate interests. It was managed by activists within organizations that label themselves “skeptics” and operate a type of activism across Wikipedia, MediaWikis, blogs, YouTube, and social media sites such as Reddit.
As self declared “skeptics”, they claim to battle online misinformation about science, noice.
But a troll farm is still a troll farm, not nice.
A search for my name still reveals that RationalWiki entry as the number one return on Google to this day.
Any SEO pro will note that this RationalWiki article about me has five hundred backlinks within RationalWiki pointing back to it. Combined with the high page ranking Google affords MediaWikis, this ensures that the article remains the number one search return for my name, something optimized by someone within RationalWiki who is knowledgeable about gaming Google’s search algorithm.
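A backlink count like this is independently checkable: every MediaWiki site exposes its internal link graph through the standard Action API’s `list=backlinks` module. A minimal sketch in Python, using only the standard library; the API base URL and page title below are placeholders, not the actual article.

```python
import json
import urllib.parse
import urllib.request

def backlinks_url(api_base, title, cont=None):
    """Build a query URL for one page of backlinks to `title`."""
    params = {
        "action": "query",
        "list": "backlinks",
        "bltitle": title,
        "bllimit": "max",   # as many results per request as the wiki allows
        "format": "json",
    }
    if cont:
        params["blcontinue"] = cont  # continuation token from the prior page
    return api_base + "?" + urllib.parse.urlencode(params)

def count_backlinks_page(response_json):
    """Count backlinks in one API response; return (count, continue_token)."""
    data = json.loads(response_json)
    links = data.get("query", {}).get("backlinks", [])
    cont = data.get("continue", {}).get("blcontinue")
    return len(links), cont

def count_all_backlinks(api_base, title):
    """Follow continuation tokens until the full backlink count is tallied."""
    total, cont = 0, None
    while True:
        with urllib.request.urlopen(backlinks_url(api_base, title, cont)) as r:
            n, cont = count_backlinks_page(r.read().decode())
        total += n
        if not cont:
            return total
```

Pointing `count_all_backlinks` at any MediaWiki’s `api.php` with an article title returns how many internal pages link to it, the same figure an SEO audit would surface.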
This reflects a deeper flaw in MediaWiki software, one that creates a huge vulnerability for the web: any MediaWiki is easily abused by bad-faith agents, and MediaWikis rank fairly well in Google search.
Many forks of Wikipedia emerged from unresolved “wiki wars”, with the losing side simply creating its own MediaWiki that looks just like Wikipedia, but isn’t. One wiki war on Wikipedia produced Conservapedia, which in turn inspired RationalWiki to form to “battle them and troll them”.
All of this is empowered by the MediaWiki platform. These MediaWikis look like they could be Wikipedia, but are not. Governed by anonymous editing accounts and earning high page rankings simply for operating a wiki, a small handful of individuals can potentially influence thousands, hundreds of thousands, even millions of people on the internet, creating chains of cause and reaction so complex as to be almost untraceable.
The claim that I was editing Wikipedia as part of my research into consensus building was continually suppressed on Wikipedia for over eight years.
While these “skeptics” were discrediting the study to win editing permissions on Wikipedia, the Wikimedia Foundation published “Wikipedia, We Have a Problem” in its own wiki research project into platform harassment. My case study also informed Google Jigsaw’s early machine-learning model for detecting harassment on Wikipedia, at the time via Lucas Dixon, its now ex-CTO. WWHP even earned one or two footnotes in a few college research papers.
Other Wikipedia editors began distributing the case study, citing it in disputes on Wikipedia; WWHP began to get shared on Wikipedia talk pages, and a few editors even questioned Jimbo Wales about it directly.
This is when a senior Wikipedia admin actually put www.wikipediawehaveaproblem on a blacklist of censored sites, while other Wikipedia editors informed Jimbo Wales that “Wikipedia, We Have a Problem” is a website that publishes “very fringe views” expressed “far outside of the mainstream.”
RationalWiki is now stuck with an article about me that is a world-class piece of evidence of targeted online harassment and disinformation as performed by Wikipedia editors, to their own shame, preserved in the wonderfully decentralized architecture of MediaWikis.
Flagwaving: how influence operations succeed by gaming Google SEO and MediaWiki
Designers of communities: this tactic WORKS in influencing online communities, and it is a tactic your community must solve for.
What is this tactic? Let me show you. So who is Rome Viharo if you were doing a quick search say on Reddit?
That return reads like a page, a quick summary. It is composed of various headlines of posts that someone posted across Reddit, multiple times. It creates its own narrative that piggybacks off of the search return.
I call this type of strategy “flag waving consensus”. It is designed to produce a number of statements about an individual that appear factual because they are supported by links, making the search return function as an “ad hoc” article on someone.
99% of the time, no one bothers to actually check the links and verify the statements against the references used.
It’s a genius strategy, I admit, if you are conducting online influence operations.
Social media sites, MediaWikis, and Google search are all very easily gamed by bad-faith agents, and any designer seeking to introduce governance systems needs to be aware of how extreme these events can become.
The strategy employed here fooled hundreds if not thousands of people and dozens of online communities, including senior Wikipedia admins, many journalists and, the irony, many skeptics themselves.
The other disturbing component is the sheer reach of its effect, which requires a budget of zero dollars. These are orchestrated attacks that “hack” Google’s search algorithm, which rewards many of these sites with high page rank in spite of these platforms having zero recourse or accountability.
What is the solution to this problem on the internet? What is anyone to do if it happens to them?
There is zero solution to this problem on the web.
That’s pretty sobering. What’s more, big tech giants like YouTube and Google attempt to farm out the problem of online misinformation to Wikipedia.
Aiki Wiki was born from Wiki Wars and Ideological Conflicts
Aiki Wiki, developed in response to the behaviors observed in the case study, is a discussion threading tool designed for conflict resolution. The structure allows a pair of editors to edit an article only once they have resolved a disagreement or misunderstanding of some kind.
It applies an algorithmic narrative structure to online conversations, transforming chaos into comprehensible narrative events.
This tool provides a structured approach to conflict resolution, promoting healthier online interactions while also filtering out the more problematic events that emerge from breakdowns in governance within an ever-emerging consensus process.
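The pairing rule described above can be sketched as a tiny state machine: an edit stays locked until both parties to a dispute mark it resolved. This is a hypothetical illustration under my reading of the rule; the class and method names are mine, not Aiki Wiki’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PairedDispute:
    """A disagreement between exactly two editors gating one edit."""
    editor_a: str
    editor_b: str
    resolved_by: set = field(default_factory=set)

    def mark_resolved(self, editor):
        """Record that one party considers the dispute resolved."""
        if editor not in (self.editor_a, self.editor_b):
            raise ValueError(f"{editor} is not a party to this dispute")
        self.resolved_by.add(editor)

    def edit_permitted(self):
        # Permission requires agreement from BOTH parties, never just one.
        return self.resolved_by == {self.editor_a, self.editor_b}
```

The point of the design is that neither editor can unlock the article unilaterally; a third party attempting to weigh in is simply rejected, so the permission question never becomes a numbers game.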
In my research I wanted a deep dive into whatever prevents a “win-win” collaboration from emerging in online consensus discussion. I was able to distill the types of decisions and choices that prevent collaboration into three distinct arcs that can potentially emerge in conversation.
Heat, Mirrors and Shadows
The “triad of trials” in consensus building can be measured in the amount of “heat” generated when two agents in a system confront each other’s contradictions. These trials govern consensus systems by following the flow of a personalized conversation, which can erupt into personal attacks and escalate into full-blown “information warfare”, including extreme acts of deception and even attempts to subvert the system itself.
Wikipedia’s governance system actually rewards editors who know how to compete in such an environment, and can penalize editors who attempt to collaborate: hence the breakdown in its governance.
Aiki Wiki resolves the editor permission problem by rewarding pairs of editors who build collaboration, while creating an insurmountable amount of work for any agent that seeks to use the system to compete, subvert, or attack.
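The asymmetry described above, steady progress for collaborators versus compounding cost for competitors, can be illustrated with a toy cost model. The numbers and function name here are assumptions chosen purely for illustration, not Aiki Wiki’s actual algorithm.

```python
def remaining_work(moves, base=10.0):
    """Estimate work left before an edit unlocks, given a move history.

    `moves` is a sequence of "collaborate" or "compete" strings.
    Collaborative moves make linear progress; competitive moves
    multiply the remaining work, modeling the compounding "heat".
    """
    work = base
    for move in moves:
        if move == "collaborate":
            work = max(0.0, work - 2.0)   # steady, linear progress
        elif move == "compete":
            work *= 2.0                   # compounding penalty
        else:
            raise ValueError(f"unknown move: {move}")
    return work
```

Under any model with this shape, a handful of collaborative moves reaches zero remaining work, while each competitive move doubles the distance to the goal, which is the sense in which competing becomes “an insurmountable amount of work”.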