March 19, 2014

LinkedIn's Scarlet Letter - Social Media Clarity Podcast

LinkedIn's Scarlet Letter - Episode 14


Marc, Scott, and Randy discuss LinkedIn's so-called SWAM (Site Wide Automatic Moderation) policy, and Scott provides some tips on moderation system design...

[There is no news this week in order to dig a little deeper into the nature of moderating the commons (aka groups).]

John Mark Troyer: Hi, this is John Mark Troyer from VMware, and I'm listening to the Social Media Clarity podcast.

Randy: Welcome to episode 14 of the Social Media Clarity podcast. I'm Randy Farmer.

Scott: I'm Scott Moore.

Marc: I'm Marc Smith.

Marc: Increasingly, we're living our lives on social-media platforms in the cloud, and in order to protect themselves, these services are deploying moderation systems, regulations, tools to control spammers and abusive language. These tools are important, but sometimes the design of these tools have unintended consequences. We're going to explore today some choices made by the people at LinkedIn in their Site Wide Automatic Moderation system known as SWAM. The details of this service are interesting, and they have remarkable consequences, so we're going to dig into it as an example of the kinds of choices and services that are already cropping up on all sorts of sites, but this one's particularly interesting because the consequence of losing access to LinkedIn could be quite serious. It's a very professional site.

Scott: SWAM is the unofficial acronym for Site Wide Automated Moderation, and it's been active on LinkedIn for about a year now. Its intent is to reduce spam and other kinds of harassment in LinkedIn groups. It's triggered by a group owner or a group moderator removing the member or blocking the member from the group. The impact that it has is that it becomes site wide. If somebody is blocked in one group, then they are put into what's called moderation in all groups. That means that your posts do not automatically show up when you post, but they go into a moderation queue and have to be approved before the rest of the group can see them.

Randy: Just so I'm clear, being flagged in one group means that none of your posts will appear in any other LinkedIn group without explicit approval from the moderator. Is that correct?

Scott: That's true. Without the explicit approval of the group that you're posting to, your posts will not be seen.

Randy: That's interesting. This reminds me of the Scarlet Letter from American Puritan history. When someone was accused of a crime, specifically adultery, they were branded so that everyone could tell. Regardless of whether they were truly a party to the adultery, or even a victim, they were cast out. SWAM creates a similar cast-out mechanism, but unlike then, when branding was an explicit action the whole community knew about, a moderator on a LinkedIn group can do this by accident.

Scott: In a Forbes article from February, someone related the story of joining a LinkedIn group that was intended for women, though it had a male group owner and nothing explicitly stated that the group was for women only. The practice was that if men joined the group and posted, the owner would simply flag their posts as a way of keeping the group women-only. This meant that, simply because the rules were not clear and the expectations were not explicit, this person was put into moderation across LinkedIn for making an honest mistake.

Randy: And this person was a member of multiple groups and now their posts would no longer automatically appear. In fact, there's no way to globally turn this off, to undo the damage that was done, so now we have a Scarlet Letter and a non-existent appeals process, and this is all presumably to prevent spam.

Scott: Yeah, supposedly.

Randy: So it has been a year. Has there been any response to the outcry? Have there been any changes?

Scott: Yes. It seems that LinkedIn has taken a second look and made a few minor changes. The first notable one is that moderation is now temporary: it can last an undetermined amount of time, up to a few weeks. The second is that they seem to have actually expanded how you can get flagged, to include any post, contribution, or comment that is marked as spam or flagged as not being relevant to the group.

Randy: That's pretty amazing. First of all, shortening the time frame doesn't really do anything. You're still stuck with a Scarlet Letter; it just fades over a few weeks.

Marc: So there's a tension here. System administrators want to create code that essentially is a form of law. They want to legislate a certain kind of behavior, and they want to reduce the cost of people who violate that behavior, and that seems sensible. I think what we're exploring here is unintended consequences and the fact that the design of these systems seems to lack some of the features that previous physical-world or legal relationships have had: you get to know something about your accuser. You get to see some of the evidence against you. You get to appeal. All of these are expensive, and I note that LinkedIn will not tell you who or which group caused you to fall into the moderation status. They feel that there are privacy considerations there. It is a very different legal regime, and it's being imposed in code.

Randy: Yes. What's really a shame is they are trying to innovate here, where in fact there are best practices that avoid these problems. The first order of best practice is to evaluate content, not users. What they should be focusing on is spam detection and behavior modification. Banning or placing into moderation, what they're doing, does neither. It certainly catches a certain class of spammer, but, in fact, the spam itself gets caught by the reporting. Suspending someone automatically from the group they're in or putting them into auto-moderation for that group if they're a spammer should work fine.

Also, doing traffic analysis on this happening in multiple groups in a short period of time is a great way to identify a spammer and to deal with them, but what you don't need to do is involve volunteer moderators in cleaning up the exceptions. They can still get rid of the spammers without involving moderators handling the appeals because, in effect, there is an appeals process. You appeal to every single other group you're in, which is really absurd because you've not done anything wrong there - you may be a heavy contributor there. We've done this in numerous places: I've mentioned before on the podcast my book Building Web Reputation Systems. Chapter 10 describes how we eliminated spam from Yahoo Groups without banning anyone.
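
To make the "evaluate content, not users" idea concrete, here is a minimal sketch of the kind of cross-group traffic analysis Randy describes. All names and thresholds are invented for illustration; this is not LinkedIn's or Yahoo's actual implementation. Reported content can be hidden immediately (a content-level action), while the account is escalated for service-level review only when reports arrive from several distinct groups within a short window:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical thresholds -- a real service would tune these empirically.
REVIEW_WINDOW = timedelta(hours=24)
DISTINCT_GROUP_THRESHOLD = 3

class SpamSignals:
    """Tracks spam reports per user; escalates only on a cross-group pattern."""

    def __init__(self):
        # user_id -> list of (timestamp, group_id) report events
        self.reports = defaultdict(list)

    def report_post(self, user_id, group_id, now):
        """Record a spam report on a post.

        Returns True when the account now warrants service-level review,
        i.e. reports span several distinct groups inside the window.
        The reported post itself can be hidden regardless of this result.
        """
        self.reports[user_id].append((now, group_id))
        cutoff = now - REVIEW_WINDOW
        recent_groups = {g for (t, g) in self.reports[user_id] if t >= cutoff}
        return len(recent_groups) >= DISTINCT_GROUP_THRESHOLD

signals = SpamSignals()
t0 = datetime(2014, 3, 19, 12, 0)
# Repeated reports inside one group hide content but never escalate the account.
assert signals.report_post("u1", "group-a", t0) is False
assert signals.report_post("u1", "group-a", t0) is False
# Reports from three distinct groups inside the window trigger a review.
signals.report_post("u1", "group-b", t0 + timedelta(hours=1))
assert signals.report_post("u1", "group-c", t0 + timedelta(hours=2)) is True
```

The design choice this illustrates: a single group's flag only ever affects content in that group, and only a service-wide pattern, detected by the platform rather than by volunteer moderators, touches the account.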

Marc: I would point us to the work of Elinor Ostrom, an economist and social theorist, who explored the ways that groups of people can manage each other's behavior without necessarily imposing draconian rules. Interestingly, she came up with eight basic rules for managing the commons, which I think is a good metaphor for what these LinkedIn discussion groups are.

  1. One is that there is a need to "Define clear group boundaries." You have to know who's in the group and who's not in the group. In this regard, services like LinkedIn work very well. It's very clear that you are either a member or not a member.
  2. Rule number two, "Match rules governing use of common goods to local needs and conditions." Well, we've just violated that one. What didn't get customized to each group is how they use the ban hammer. What I think is happening that comes up in the stories where you realize somebody has been caught in the gears of this mechanism is that people have different understandings of the meaning of the ban hammer. Some of them are just trying to sweep out what they think of as just old content, and what they've just done is smeared a dozen people with a tar that will follow them around LinkedIn.
  3. Three is that people should "Ensure that those affected by the rules can participate in modifying the rules." I agree that people have a culture in these groups, and they can modify the rules of that culture, but they aren't being given the options to tune how the mechanisms are applied and what the consequences of those mechanisms are. What if I want to apply the ban hammer and not have it ripple out to all the other groups you're a member of?

    Randy: Well, and that's rule four.

  4. Marc: Indeed, which reads, "Make sure the rule-making rights of community members are respected by outside authorities." There should be a kind of federal system in which group managers and group members choose which set of rules they want to live under, but interestingly,
  5. number five really speaks to the issue at hand. "Develop a system carried out by community members for monitoring members' behavior."

    Randy: I would even refine that a little bit online, which is to not only monitor, but to help shape members' behavior so that people are helping people conform to their community.

  6. Marc: Indeed, because this really ties into the next one, which may be the real problem here at the core. "Use graduated sanctions for rule violators." That seems not to be in effect here with the LinkedIn system. You can make a small mistake in one place and essentially have the maximal penalty applied to you. I'm going to suggest that number seven also underscores your larger theme, which is about shaping behavior rather than canceling out behavior.
  7. Number seven is, "Provide accessible low-cost means for dispute resolution", which is to say bring the violators back into the fold. Don't just lock them up and shun them.

    Randy: Specifically on dispute resolution, which includes an appeals process, for Yahoo Answers, we implemented one which was almost 100% reliable in discovering who a spammer was. If someone had a post hidden, an email would be sent to the registered email address saying, "Your post has been hidden," and takes you through the process for potentially appealing. Now, what was interesting is if the email arrived at a real human being, it was an opportunity to help them improve their behavior. If they could edit, they could repost.

    For example, this is what we did: if you got one of these warnings, you were actually allowed to edit the offensive post and repost it with no penalties. The idea was to improve the quality of the interaction. It turns out that, to a first approximation, all spammers on Yahoo Answers had bogus email addresses, so the appeal would never be processed and the post would stay hidden.

  8. Marc: Well, I'm going to do number eight, and eight says, "Build responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system." It doesn't say let the entire interconnected system have one rule that binds them all.

    Randy: And it also says from the bottom up. I actually approve of users marking postings as spam and having that content hidden and moving some reputation around. Where we run into trouble is when that signal is amplified by moving it up the interconnected system and then re-propagated across the system. The only people who have to know whether or not someone's a spammer is the company LinkedIn. No other moderator needs to know. Either the content is good or it's not good.
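
The email-based appeal loop Randy describes for Yahoo Answers can be sketched in a few lines. This is a simplified illustration with invented names, not the production system; the key property it demonstrates is that a hidden post stays hidden unless a reachable human edits and reposts it, so accounts registered with bogus email addresses filter themselves out of the appeal process:

```python
class Post:
    """A post whose author may or may not have a reachable email address."""

    def __init__(self, author_email_valid, text):
        self.author_email_valid = author_email_valid
        self.text = text
        self.hidden = False

def hide_with_appeal(post, send_email):
    """Hide a reported post and offer an appeal via the registered address."""
    post.hidden = True
    if post.author_email_valid:
        send_email("Your post has been hidden; you may edit and repost it.")

def appeal(post, edited_text):
    """Editing and reposting restores visibility with no penalty."""
    post.text = edited_text
    post.hidden = False

outbox = []
human = Post(author_email_valid=True, text="borderline post")
spammer = Post(author_email_valid=False, text="buy pills now")

hide_with_appeal(human, outbox.append)
hide_with_appeal(spammer, outbox.append)
assert len(outbox) == 1          # only the real address receives an appeal email
appeal(human, "revised post")
assert human.hidden is False     # the human edits and reposts, no penalty
assert spammer.hidden is True    # the spammer's appeal never arrives
```

This is a graduated sanction in miniature: the first consequence is cheap and reversible, and the system separates honest mistakes from spam without anyone reviewing an appeals queue.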

Marc: Elinor Ostrom's work is really exciting, and she certainly deserved the Nobel Prize for it because she really is the empirical answer to that belief that anything that is owned by all is valued by none. That's a phrase that leads people to dismiss the idea of a commons, to believe that it's not possible to ethically and efficiently steward something that's actually open, public, a common resource, and of course, the internet is filled with these common resources. Wikipedia is a common resource. A message board is a common resource.

Like the commonses that Ostrom studied, a lot of them are subject to abuse, but what Ostrom found was that there were institutions that made certain kinds of commons relationships more resilient in the face of abuse, and she enumerated eight of them. I think the real message is that, given an opportunity, people can collectively manage valuable resources and give themselves better resources as a result by effectively managing the inevitable deviance, the marginal cases where people are trying to make trouble, but most people are good.

Scott: Your tips for this episode are aimed at community designers and developers who are building platforms that allow users to form their own groups.

  1. First, push the power down - empower local control and keep the consequences local.
  2. Give group owners the freedom to establish and enforce their own rules for civil discourse.
  3. You will still be able to keep content and behavior within your service's overall terms of use while allowing a diversity of cultures within different groups.
  4. If, as a service, you detect broader patterns of (content or user) behavior, you can take additional action. But respect that different groups may prefer different behaviors, so be careful to not allow one or even a small set of groups dictate consequences that impact all other groups.
  5. Now that we are giving local control, be sure to allow groups to moderate content separately from moderating members.
  6. As often as not, good members misstep and make bad posts, especially if they are new to a group.
  7. Punishing someone outright can cost communities future valuable members.
  8. By separating content from members, the offending content can be dealt with and the member helped to fit the local norms.
  9. Ask community managers and you will hear stories of a member who started off on the wrong foot and eventually became a valued member of their community. This is common. Help group moderators avoid punishing people who make honest mistakes.
  10. When it comes to dispute resolution between members and group moderators, one way to make it easy is to mitigate the potential dispute in the first place.
  11. Make it easy for moderators to set behavior expectations by posting their local rules and guidelines and build in space in your design where local rules can be easily accessed by members.
  12. Also give group owners the option of requiring an agreement to the local rules before a member is allowed to join the group.
  13. And make it easy for members to contact moderators, and encourage them to ask about a post before even posting.
  14. Now, if the group platform offers a moderation queue, give clear notifications to moderators about pending posts so that reviewing the queue is easy to include in their work-flow, because moderating communities does have a work-flow.
  15. And finally, build a community of group owners and moderators -- and LISTEN to them as they make recommendations and request tools that help them foster their own local communities. The more you help them build successful communities, the more successful your service or platform will be.
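
Tips 1 and 5, keeping consequences local and moderating content separately from members, can be expressed directly in a data model. The sketch below uses invented names and is only one way to structure this: moderation state is keyed by (group, member), so removing someone from one group can never change their standing anywhere else, and content actions are recorded independently of member actions.

```python
class GroupModeration:
    """Group-scoped moderation: consequences never leak across groups."""

    def __init__(self):
        # (group_id, user_id) -> "ok" | "moderated" | "removed"
        self.member_status = {}
        # post ids acted on directly -- content handled apart from members
        self.hidden_posts = set()

    def status(self, group_id, user_id):
        return self.member_status.get((group_id, user_id), "ok")

    def remove_member(self, group_id, user_id):
        # The consequence is local: only this (group, user) pair changes.
        self.member_status[(group_id, user_id)] = "removed"

    def hide_post(self, post_id):
        # A content action carries no member penalty at all.
        self.hidden_posts.add(post_id)

mod = GroupModeration()
mod.hide_post("post-1")                  # bad post handled, member untouched
mod.remove_member("knitting", "alice")
assert mod.status("knitting", "alice") == "removed"
assert mod.status("woodwork", "alice") == "ok"   # standing elsewhere intact
```

Contrast this with a site-wide flag keyed on user_id alone, which is effectively what SWAM does: one moderator's local decision becomes a global penalty by construction.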

Randy: That was a great discussion. We'd like the people at LinkedIn to know that we're all available as consultants if you need help with any of these problems.

Marc: Yeah, we'll fix that for you.

Randy: We'll sign off for now. Catch you guys later. Bye.

Scott: Good-bye.

Marc: Bye-bye.

[Please make comments over on the podcast's episode page.]

February 21, 2014

Five Questions for Selecting an Online Community Platform

Today, we're proud to announce a project that's been in the works for a while: A collaboration with Community Pioneer F. Randall Farmer to produce this exclusive white paper - "Five Questions for Selecting an Online Community Platform." 
Randy is co-host of the Social Media Clarity podcast, a prolific social media innovator, and literally co-wrote the book on Building Web Reputation Systems. We were very excited to bring him on board for this much needed project. While there are numerous books, blogs, and white papers out there to help Community Managers grow and manage their communities, there's no true guide to how to pick the right kind of platform for your community. In this white paper, Randy has developed five key questions that can help determine what platform suits your community best. This platform agnostic guide covers top level content permissions, contributor identity, community size, costs, and infrastructure. It truly is the first guide of its kind and we're delighted to share it with you.
Go to the Cultivating Community post to get the paper.

October 01, 2013

Social Networks, Identity, Pseudonyms, & Influence Podcast Episodes

Here are the first 4 episodes of The Social Media Clarity Podcast:

  1. Social Network: What is it, and where do I get one? (mp3) 26 Aug 2013
  2. HuffPo, Identity, and Abuse (mp3) 5 Sep 2013
  3. Save our Pseudonyms! (Guest: Dr. Bernie Hogan) (mp3) 16 Sep 2013
  4. Influence is a Graph (mp3) 30 Sep 2013
Subscribe via iTunes

Subscribe via RSS

Listen on Stitcher

Like us on Facebook

August 26, 2013

Follow Us Over to the Social Media Clarity Podcast

We're gettin' the band back together! Your friendly BWRS authors are reunited on a brand new podcast, aimed at designers, product managers and producers of social platforms and products.

Social Media Clarity will be a regular podcast: "15 minutes of concentrated analysis and advice about social media in platform and product design." Joining us is Marc Smith.

We're all really pleased with how the first episode has turned out. We discuss:

  • Rumors that FB will soon start throttling OpenGraph and API usage for 3rd parties
  • A round-table discussion: does my product need its own social networking capabilities?
  • A practical tip at the end, an intro to NodeXL

Check it out, won't you? You can subscribe to the series (soon) through iTunes, or now at

January 24, 2011

A Review for programmers

A review aimed at engineers just went up over at

Building Web Reputation Systems
Author: Randy Farmer and Bryce Glass
Publisher: O'Reilly, 2010
Pages: 336
ISBN: 978-0596159795
Aimed at: Web designers and developers who want to incorporate feedback
Rating: 4
Pros: Valuable advice based on real experience
Cons: Could be improved by a different order of chapters
Reviewed by: Lucy Black

...The book concludes with a real-life case study based on Yahoo! Answers Community Content Moderation. This makes interesting reading and gives a context for what has gone before. It left me wondering whether I might have got more from the rest of the book had I read it first - but of course with this type of book you won't just read once and set aside. You'll refer to it for help as the need arises - and there is an index that will help you locate specific information.

At the end of the day I realised I'd gleaned a lot of useful and practical advice but it would have been an easier experience with just a little reorganisation of the material.

January 13, 2011

New Book Review of Building Web Reputation Systems

Architecture, SOA, BPM, EAI, Cloud has a review of Building Web Reputation Systems...

"...Book is light read but certainly deserve an attentive read and particularly from product designers and who ever involved in product conceptualization..."

It also contains a great set of book related links...

November 16, 2010

Quora:What lessons of Social Web do you wish had been better integrated into Yahoo?

On Quora, an anonymous user asked me the following question:

In hindsight, what lessons have you learned from the Social Web that you wish you had been more successful at integrating into Yahoo before you were let go?

I considered this question at length when composing this reply - this is probably the most thought-provoking question I've been asked to publicly address in months.

If you read any of my blog posts (or my recent book), you already know that I've got a lot of opinions about how the Social Web works: I rant often about identity, reputation, karma, community management, social application design, and business models.

I did these same things during my time for and at Yahoo!

We invented/improved user-status sharing (what later became known as Facebook Newsfeeds) when we created Yahoo! 360° [Despite Facebook's recently granted patent, we have prior art in the form of an earlier patent application and the evidence of an earlier public implementation.]

But 360 was prematurely abandoned in favor of a doomed-from-the-start experiment called Yahoo! Mash. It failed out of the gate because the idea was driven not by research, but by personality. But we had hope in the form of the Yahoo! Open Strategy, which promised a new profile full of social media features, deeply integrated with other social sites from the very beginning. After a year of development - Surprise! - Yahoo! flubbed that implementation as well. In four attempts (Profiles, 360, Mash, YOS) they'd only had one marginal success (360), which they sabotaged several times by telling users over and over that the service was being shut down and replaced with inferior functionality. Game over for profiles.

We created a reputation platform and deployed successful reputation models in various places on Yahoo! to decrease operational costs and to identify the best content for search results and to be featured on property home pages [See: The Building Web Reputation Systems Wiki and search for Yahoo to read more.]

The process of integrating with the reputation platform required product management support, but almost immediately after my departure the platform was shipped off to Bangalore to be sunsetted. Ironically, since then the folks at Yahoo! have been thinking about building a new reputation platform - since reputation is obviously important, and everyone from the original team has either left, been laid off, or moved on to other teams. Again, this will be the fourth implementation of a reputation platform...

Are you sensing a pattern yet?

Then there's identity. The tripartite identity model I've blogged about was developed while at Yahoo as an attempt to explain why it is brain-dead to ask users to reveal their IM name, their email address, and half their login credentials to spammers in order to leave a review of a hotel.

Again we built a massively scalable identity service platform to allow users to be seen as their nickname, age, and location instead of their YID. And again, Yahoo! failed to deploy properly. Despite a cross-company VP-level mandate, each individual business unit silo dragged their heels in doing the (non-trivial, but important and relatively easy) work of integrating the platform. Those BUs knew the truth of Yahoo! - if you delay long enough, any platform change will lose its support when the driving folks leave or are reassigned. So - most properties on Yahoo! are still displaying YIDs and getting up to 90% fewer user contributions as a result.

That's what I learned: Yahoo! can't innovate in Social Media. It has a long history in this, from Yahoo! Groups, which during my tenure had three separate web 2.0 re-designs, with each tossed on the floor in favor of cheap and easy (and useless) integrations (like with Yahoo! Answers) to Flickr, Upcoming, and Delicious. I'm sad to say, Yahoo! seems incapable of reprogramming its DNA, despite regular infusions of new blood. Each attempt ends in either an immune-response (Flickr has its own offices, and a fairly well known disdain for Sunnyvale) or assimilation and decreasing relevance (HotJobs, Personals, Groups, etc.).

So, in the end, I find I can't answer the question. I was one of many people who tried to drive home the lessons of the social web for the entire time I was there. YOS (which I helped spec in fall 2007) was the last attempt to reshape the company to be social through and through. But, it was a lost cause - the very structure of the environment is personality driven. When those personalities leave, their projects immediately get transferred to Bangalore for end-of-life support, just as much of YOS has been...

I don't know what Yahoo! is anymore, but I know it isn't inventing the future of social anything.

[As I sat through this year's F8 developers conference and listened to Mark Z describe 95% of the YOS design, almost 3 years later, I knew I'd have to write this missive one day. So thanks for the prodding, Anonymous @ Quora]

Randy Farmer
Social Media Consultant, MSB Associates
Former Community Strategy Analyst for Yahoo!

[Please direct comments to Quora]

October 12, 2010

First! Randy to be the kickoff guest for new Community Chat podcast series.

Bill Johnston and Thomas Knolls are launching a new live podcast series: Community Chat on talkshoe.

I am so honored to be the lead-off guest on their inaugural episode (Wednesday 10-13-10):

The kickoff episode of Community Chat! [We] will be discussing the premise of the Community Chat podcast with special guest Randy Farmer. We will also be getting a preview of Blog World Expo from Chuck Hemann.

I'll be talking with them about online community issues developers and operators all share in common - well, as much as I can in 10 minutes. :-) Click on the widget above to go there - it will be recorded for those who missed it live...

UPDATE: The widget now has an option to play back the session. Just choose "Kickoff" and press play. :-)

September 29, 2010

BWRS on Kindle Web - Try before you buy!

You can now read the Kindle edition of Building Web Reputation Systems on the web (search, print, etc.) and it is much cheaper than the paper version. Here's the free sample:

August 25, 2010

Oct-06-10 SVPMA Talk: Web Reputations: Putting Social Media to Work in Your Products

On October 6th, Randy will be presenting Web Reputations: Putting Social Media to Work in Your Products at the Silicon Valley Product Managers Association:

While social media were originally focused on consumers, product managers in every segment are wondering how to deal with this shift to customer interaction and communities of interest. We’re crowdsourcing ideas for our B2B products, putting up community self-support sites, and tweeting our updates. We surf user-generated content on Facebook and LinkedIn. Anonymous posts rate our products against the competition. Customer groups that love us - and hate us - are organizing on their own.

Hidden in social media are problems of web reputation: how to tell good stuff from bad, how to engage and reward contributors, scale up rating systems, and stamp out inappropriate content. Web reputations are a source of social power. As product managers, we need to understand the reputation and reward systems we put in place when we add social networking to our products/services. This talk will provide you with the sample criteria you must think about when creating a social media strategy for your product. It will identify the five most common mistakes product designers and product managers make when considering adding reputation, ratings, and reviews to their applications. It will provide you with the intellectual tools needed to avoid these pitfalls, as well as teach you how to think effectively about the alternatives. For example, questions like “Is Like always better than Ratings?”, “When should I use thumbs-down?”, and “Can I re-purpose reputation?” will be discussed.

It will be a variant of the Reputation Missteps talk - targeted at product managers. Non-members can attend for a nominal fee. This article will be updated with a registration link when it becomes available.