Fake reviews, trust, and why managing digital communities is really, really hard

I helped run the blogging community Blogcritics back in the day. My friend, founder Eric Olsen, is a master at managing online communities. He makes it look easy, using a mix of relentless engagement, aplomb, enthusiasm, and occasional sternness to keep the restless and vocal hordes of bloggy commenters and commenting bloggers in line (and online).

I bring this up because managing online communities is really, really hard. It was hard when the Internet was a little bit younger and smaller and more innocent, pre-algorithms and bots and fake news. And it’s vastly more difficult today.

I’m as harsh a critic of Facebook and other social platforms as anyone, but it’s important to keep in perspective that the issues they’re trying to address are incredibly difficult, even with massive brainpower and resources thrown at combating them.

Here’s a good example: Amazon has a serious “fake product reviews” problem.

  • The fake Amazon review economy is a thriving market, ripe with underground forums, “How To Game The Rankings!” tutorials, and websites with names like (now-defunct) “amazonverifiedreviews.com.” But the favored hunting grounds for sellers on the prowl is Amazon’s fellow tech behemoth, Facebook.

    In a recent two-week period, I identified more than 150 private Facebook groups where sellers openly exchange free products (and, in many cases, commissions) for 5-star reviews, sans disclosures.

(Full disclosure here: I semi-recently had some early conversations with Amazon about a digital product gig in which the role would “incubate” a solution to the fake review problem and then evangelize it to executives within the company with the goal of rolling it out more widely. I thought it was… strange that they wanted to bring in someone from outside the company in a non-executive capacity to tackle a problem that is and will be incredibly challenging to solve.)

Twitter (my favorite social media product) has plenty of its own issues to deal with in terms of policing bad actors (including, some would argue, a certain head of the executive branch of the U.S. government), protecting people from harassment, and blocking bad people from doing bad things. Recently, “CEO Jack Dorsey said the company is looking to change the focus from following specific individuals to tracking topics of interest, a significant shift from the way the service has always operated.” Dorsey notes that “what’s incentivized today on the service is at odds with the goal of healthy dialogue.” If this is executed, it will fundamentally change the way that Twitter works. It remains unclear whether it will have the desired impact, but Twitter is at least discussing fairly radical changes to its core product experience.

Then there are times when social media companies actively do the opposite of helping:

  • #Linkedin is becoming scary with fake connect requests being sent, making you think the other person has genuinely sent the invite. Only later realising that Linkedin is playing the users by auto generating the requests.

Not sure if that one is real or some spammy thing, but it does speak to the increasingly uneasy relationship with social media products that many of us have.

Then there are times when companies do things to make you scratch your head and wonder what in the world they could have been thinking.

  • Facebook’s controversial factchecking program is partnering with the Daily Caller, a rightwing website that has pushed misinformation and is known for pro-Trump content.

For what it’s worth, here’s Facebook’s response:

  • Asked about its collaboration with the Daily Caller, a Facebook spokesperson noted that any news organization can apply to join the program after it gains certification from the non-partisan International Fact-Checking Network, run by the journalism institute Poynter. Poynter could not immediately be reached.

And speaking of Facebook, there’s quite a read from Wired called “15 Months of Hell Inside Facebook.” You get fun pull quotes like this:

  • The confusing rollout of meaningful social interactions—marked by internal dissent, blistering external criticism, genuine efforts at reform, and foolish mistakes—set the stage for Facebook’s 2018. This is the story of that annus horribilis, based on interviews with 65 current and former employees. It’s ultimately a story about the biggest shifts ever to take place inside the world’s biggest social network. But it’s also about a company trapped by its own pathologies and, perversely, by the inexorable logic of its own recipe for success.

The Wired story opens with a George Soros quote from the World Economic Forum in Davos:

  • Mining and oil companies exploit the physical environment; social media companies exploit the social environment

Fear about how Facebook (and, to be fair, other social media and tech companies) uses and at times exploits user data and privacy has an impact on every new initiative the company attempts.

When this story came out — “Facebook is working on a voice assistant to rival Amazon Alexa and Apple Siri” — Drew Olanoff (one of my favorite follows on Twitter) responded with:

  • i mean seriously. here are the things facebook would now like us to trust them with, even though they haven’t properly addressed privacy issues, etc. etc. etc.

    – our eyeballs
    – our homes/offices
    – our voice

    yeahhhhhhNOPE

TechCrunch rips into Facebook for simply offering a sale(!) on its Portal product with, “Facebook’s Portal will now surveil your living room for half the price.”

  • No, you’re not misremembering the details from that young adult dystopian fiction you’re reading — Facebook really does sell a video chat camera adept at tracking the faces of you and your loved ones.

So, yes, these problems are incredibly difficult to solve, but Facebook in particular has done a good job of deserving the “blistering external criticism” it has received along the way. It’s no wonder that Mark Zuckerberg has spent recent months attempting to reinvent himself as an advocate for stronger Internet privacy and election laws.

Or, as a piece from The Ringer succinctly puts it:

  • The company’s motto used to be “move fast and break things”; now it might be “move fast and fix reputation.”


On digital communities, Blogcritics, and Reddit’s CMV

Many years ago, I helped run a blogging community called Blogcritics. Much like this newsletter, the topics on Blogcritics tended to have a pop culture tilt (indeed, the site was founded on the notion that bloggers could get free CDs, DVDs, and books in exchange for critical reviews) but also covered just about everything, including a goodly dose of politics.

In an online community with a base of bloggers at its core, you can bet that the comments underneath articles could get awfully lively at times. But it was a rare place where people with different political and social views could come together and discuss things amicably (at least usually). My mentor, friend, and business partner Eric Olsen was a master at policing the occasional offender of the site’s pretty loose policies with a fine sense of engagement, diplomacy, and generosity. On rare occasions someone would get booted from the site, but even then Eric would often let them back in after a short “time out,” having accepted their promises of better behavior in the future.

I mention all of this because while I recognized that Blogcritics was a special place that showcased the power of good that technology and online communities can provide, I had no idea just how rare it was, or how difficult reining in the toxic elements of many online communities and social networks could be.

I’m a big fan of Twitter, even though I understand some of the perils of engaging with people in a digital open forum, some of them hiding behind anonymous profiles. I’ve spent years curating the people I follow, and I liberally block those who I believe are toxic to me and the community (one fun little game I play is blocking the nasty and immature repliers to a sports beat writer I follow).

This curation works for me, but it also protects me from some of the darker elements. I also happen to be a guy. So I was a little disturbed to learn that something known as the “Twitter reply guy” is a thing.

Reddit, which has developed into a popular hub comprising hundreds if not thousands of “subreddit” communities, is well known as a digital presence that can get particularly nasty, depending on the subreddit neighborhood you wander into.

That all leads to my finding a piece from The Next Web that covers the emergence of a subreddit called Change My View (CMV, or r/changemyview in Reddit parlance). Change My View describes itself as follows:

  • A place to post an opinion you accept may be flawed, in an effort to understand other perspectives on the issue. Enter with a mindset for conversation, not debate.

CMV is seeking to leverage both technology and human curation to help foster a healthier community:

  • CMV gamifies this healthy conversation in many ways. The first is through the use of a DeltaBot, which calculates awarded deltas and updates a leaderboard, called a deltaboard, where necessary. Redditors can monitor their standing on the deltaboard located in the sidebar next to each post. But what makes the subreddit tick is careful moderation. One moderator, who preferred we refer to her by her Reddit username (u/convoces), said that the system relies on a robust set of rules. There are five for submissions and five rules for comments, each “designed to encourage productive discourse and heavy moderation.”
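
To make the scoring mechanic concrete, here’s a minimal Python sketch of the delta-counting idea, simplified to OP-awarded deltas and built on invented data structures and names. The real DeltaBot runs against Reddit’s API and enforces CMV’s fuller rule set, so treat this as illustration only:

```python
from collections import Counter

DELTA = "\u2206"  # "∆", the symbol a poster includes to award a delta

def award_deltas(post_author, comments, leaderboard):
    """Scan a thread's comments; when the OP replies to someone with a
    delta symbol, credit that commenter on the leaderboard."""
    for comment in comments:
        parent = comment.get("parent")
        if (
            comment["author"] == post_author      # simplification: only the OP awards
            and DELTA in comment["body"]          # the reply contains the delta symbol
            and parent is not None
            and parent["author"] != post_author   # no self-awarded deltas
        ):
            leaderboard[parent["author"]] += 1
    return leaderboard

# A toy thread in which the OP awards a single delta:
challenge = {"author": "skeptic42", "body": "Have you considered X?", "parent": None}
reply = {"author": "op_user", "body": DELTA + " Good point, view changed.", "parent": challenge}

board = award_deltas("op_user", [challenge, reply], Counter())
print(board.most_common())  # [('skeptic42', 1)]
```

As the quote suggests, the bot mostly keeps score; the heavy lifting is the human moderation wrapped around it.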

Indeed, when I poked around CMV I was pleasantly surprised at the sense of decorum on display. Here’s the first comment I noticed, on a post about immigration:

  • “You have the right idea, but you are missing the big reason why it hasn’t worked so far, and is unlikely to work anytime soon.”

This part of the policy gets really intriguing:

  • Two of the more interesting guidelines are that comments must challenge at least one aspect of the original poster’s view, or ask a clarifying question. Neutral stances or simple agreement don’t add to the conversation. Nor do threats of harm or self-promotion.

There has long been talk that there’s no way to avoid the “lowest common denominator” within online communities. To wit: I worked on the digital side of some of the biggest newspapers in the U.S., where the attitude toward the potential for online conversations with and among readers was ambivalent at best and, honestly, often contemptuous.

Who knows if the policies and associated algorithms that CMV is trying out will be the “answer,” but it’s heartening to see new attempts being made, and equally heartening to see a community — even if it’s one subreddit for now — buying into it.

This post originally appeared in what was then called The Eric Berlin E-mail Newsletter. To get a weekly blast of pop culture, digital media, and politics that helps make sense of an increasingly frazzled world, sign on up for The Berlin Files here.