Khan & Warren vs Facebook – Round One
According to the Guardian, Frank Warren and Amir Khan are threatening to sue Facebook over derogatory comments made about them on the social network.
Facebook allows any user to set up private or public groups. You can go onto Facebook right now and search for Johnny Depp, which will bring back some 500 groups dedicated to the actor – but these groups can just as easily be set up to deride people, as in the recent example of the Dixons employees who set up a Facebook group and proceeded to make fun of Dixons customers.
Amir Khan and Frank Warren are not objecting to fan pages, but to abusive and racist comments posted on Facebook. Facebook has a policy of removing “abusive, vulgar, hateful or racially and ethnically objectionable” comments which violate its terms but, as the Guardian points out, the sheer volume of content makes this a difficult task, leaving users to police comments themselves. The paper also notes that such language would never be accepted in a newspaper.
The ease with which someone can set up a group or post a comment which is specifically designed to bully another person is frightening. Unlike Amir and Frank, most people who fall victim to this kind of behaviour do not have the financial means to threaten legal action unless the content is removed.
The newspaper analogy is an interesting one. The Daily Mail recently announced that it would not pre-moderate user comments, but would instead rely on users to flag abusive content, which the paper would then assess and remove if necessary. (In my view, by doing this the paper risks its reputation through association with abusive user comment.) But the fact is that Facebook is not a newspaper: it is a ‘social utility’, and merely provides a conduit for individuals to publish their own material, and thus could not be viewed as a ‘publisher’ in the same way as a newspaper. However, it does also have some responsibility for its content.
To be fair, the fact that it has a user policy at all means that it is going some way to realising this responsibility. Facebook’s terms state: “You will not post content that is hateful, threatening, pornographic, or that contains nudity or graphic or gratuitous violence.” But just having a policy is not enough: from a moral standpoint, you have to implement it, and relying on user reports obviously isn’t sufficient. If a group is set up specifically with the aim of abuse, its members are hardly likely to report abusive content.

I accept that it would be prohibitively expensive, and contrary to the whole set-up of the site, to pre-moderate all of Facebook’s content; indeed, the legal defence under Section 1 of the UK’s Defamation Act 1996, or the ‘European hosting defence’, relies on material NOT being pre-moderated by the human eye. But the development of sophisticated filters means that it is now possible to automate the moderation of abusive or illegal content, and to set up a ‘warning’ system whereby potentially harmful content is passed to a moderator to assess and take appropriate action. This would surely be construed as applying “duties of care, which can reasonably be expected ... in order to detect and prevent certain types of illegal activities” (i), rather than re-classifying Facebook as a publisher responsible for all content on its sites.
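To make the idea concrete, here is a minimal sketch in Python of the kind of triage I have in mind. It is entirely hypothetical – Facebook’s actual tooling is not public – and the word lists, the `ModerationQueue` class and the `triage` function are all my own invention. Clear violations are removed automatically, borderline comments are queued for a human moderator, and everything else is published untouched, so the bulk of the material is never pre-moderated by a human eye.

```python
import re
from dataclasses import dataclass, field

# Hypothetical word lists: placeholders only. A real system would use
# far more sophisticated classifiers; the triage logic is the point here.
BLOCKED_PATTERNS = [r"\bslur1\b", r"\bslur2\b"]    # clear violations
SUSPECT_PATTERNS = [r"\bidiot\b", r"\bscum\b"]     # needs human review

@dataclass
class ModerationQueue:
    """Holds comments awaiting review by a human moderator."""
    pending: list = field(default_factory=list)

    def flag(self, comment: str, reason: str) -> None:
        self.pending.append((comment, reason))

def triage(comment: str, queue: ModerationQueue) -> str:
    """Return 'removed', 'flagged' or 'published' for a comment."""
    text = comment.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text):
            return "removed"                # automatic removal
    for pattern in SUSPECT_PATTERNS:
        if re.search(pattern, text):
            queue.flag(comment, pattern)    # pass to a moderator
            return "flagged"
    return "published"                      # no action needed

if __name__ == "__main__":
    queue = ModerationQueue()
    for c in ["Great fight last night!", "Khan is an idiot"]:
        print(triage(c, queue), "->", c)
    print(f"{len(queue.pending)} comment(s) awaiting review")
```

The design point is that only borderline content ever reaches a human, which is arguably a reasonable ‘duty of care’ without turning the host into a pre-moderating publisher.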
(i) Recital 48 of the European Directive on electronic commerce (2000/31/EC)