It's no secret that Facebook is used as a platform for propaganda in all its various guises. You only have to look at the sponsored ads in your sidebar to be hit by targeted marketing. Then, of course, there's the user-generated propaganda; I scroll down my newsfeed daily and see someone posting an angry status about government cuts, for example.
In recent weeks, Facebook bosses have been talking about the medium's political potential more publicly. At Davos on Wednesday, Facebook COO Sheryl Sandberg, author of Lean In: Women, Work, and the Will to Lead, expressed her belief that Facebook can and should be used to combat certain types of propaganda with what she called "'like' attacks".
According to the Guardian, Sandberg said that the best voices to speak out against recruitment by the Islamic State are "the voices of people who were recruited by ISIS, understand what the true experience is, have escaped and have come back to tell the truth...", pointing to Facebook as a way to share information and support.
Sandberg used the example of German Facebook users who "liked" the Facebook page of a neo-Nazi party in order to post messages promoting inclusion on the page. "What was a page filled with hatred and intolerance was then tolerance and messages of hope." According to Adweek, more than 10,000 protesters got involved.
These are examples of what Sandberg called "counter-speech": users making an active effort to submerge hate speech in altogether more positive or useful messaging. The German campaign presumably began with one individual or organisation taking the initiative, with others joining in. It generated "7 million impressions", which is a lot of awareness-raising for anti-fascist propagandists.
This story, for me, harks back to June last year, when it was reported that 26 million Facebook users had filtered their profile picture with a rainbow flag to show support for gay marriage. These stats exist because the filter was a tool rolled out by Facebook itself, just like the French flag tool that let users show solidarity with Paris following the November 2015 attacks.
The flags sparked countless think pieces on the topic of clicktivism, which refers to supporting a cause online. Some dismissed them as "slacktivism" (lazy activism), and a writer in the Independent went so far as to describe them as a symbol of "corporate white supremacy". Most articles that come up on Google seem to just explain how to get rid of the flag once you've lost interest in the filter.
A lot of the discussion around clicktivism ignores the point that supporting a cause online doesn't mean its promotion is confined exclusively to the web. The flags are about awareness-raising, but they're not enough to implement change. As Malcolm Gladwell's New Yorker article "Small Change" suggests, social media activism appears to work because it "doesn't ask much of you", and is best when combined with more "high-risk" tactics.
We know that petitions signed online will get issues raised in parliament, as we recently saw with the "stop Donald Trump entering the UK" petition, which was brought up at Westminster. Sharing event pages for protests also lets people know when they need to show their face at a demonstration on the street, hence the great turnout at the recent protest against nurses' bursary cuts in London.
And yet, while online support feeds into direct action and can help to bring about change, we also need to remember that what we see on our social media platforms is blinkered: by who we are friends with, by Facebook's algorithms (designed to show you more of the kinds of posts you like), and by paid promotion. As Gladwell reminds us: "Our acquaintances – not our friends – are our greatest source of new ideas and information."
Facebook is not exactly as "open" as other parts of the web when it comes to the sharing of opinions, and tech firms like Facebook have been notably cautious in how they discuss the limits on what we're allowed to say and see on their platforms. Facebook CEO Mark Zuckerberg has been repeatedly hammered for contradictions between his statements on free speech and Facebook's community standards, for example.
Two weeks ago, however, news broke that Silicon Valley execs sat down with US security officials and senior law enforcement figures to talk about how they are going to deal with radicalisation online. It wasn't the first time the US government has met with tech's top executives – but this meeting was remarkably well attended, with bosses from Twitter, Apple, Facebook and Microsoft all present.
Sandberg, who was at the meeting, reportedly explained how Facebook implemented a tool to allow users to flag up a friend who is showing signs that they might be suicidal. A sort of "warning" tool, rather than a "like" tool. According to the Guardian, this triggered a discussion about whether the same system could be used to flag users showing signs of radicalisation.
Whether or not the idea gets implemented remains to be seen, but it sounds like a useful type of flag to roll out in the offensive against terrorist propaganda online. And the question that resounds from the recent tech talks is not which will win out as the most effective form of activism – "likes" of support, "counter-speech" or "warning flags" – but who gets to decide what kind of online speech is agreeable or disagreeable in the first place.