
Coming Soon To Twitter: Self-Censorship And Parental Controls For NSFW Tweets

Twitter might soon start to flag tweets that contain links to material deemed inappropriate or not safe for work.

There’s a new field in Twitter’s API, titled “possibly_sensitive”, that will only surface when a URL contained in a tweet links to content that has been flagged as ‘sensitive’.

So here’s the big question: who gets to make that definition?

Well, maybe you. And me. And everybody else. Colour me confident.

The update was revealed by developer Taylor Singletary on the Twitter Developers forum.

Beginning today you may notice a new boolean field in API responses & streams containing tweets: “possibly_sensitive”. This new field will only surface when a tweet contains a link. The meaning of the field doesn’t pertain to the tweet content itself, but instead it is an indicator that the URL contained in the tweet may contain content or media identified as sensitive content. During this initial testing phase, there’s nothing you need to do with this field and the field values cannot be relied on for accuracy. In the future, we’ll have a family of additional API methods & fields for handling end-user “media settings” and possibly sensitive content.
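To make the change concrete, here is a minimal sketch (in Python, using a made-up payload) of how a client consuming the API might read the new field. Only the “possibly_sensitive” boolean itself comes from Twitter’s announcement; everything else is illustrative.

```python
import json

# Hypothetical tweet payload, trimmed to the fields relevant here.
# Per Twitter's note, "possibly_sensitive" only appears when the tweet
# contains a link, and during this testing phase its value can't be
# relied on for accuracy.
raw = '''
{
  "id_str": "123456789",
  "text": "Check this out: http://t.co/example",
  "possibly_sensitive": true
}
'''

tweet = json.loads(raw)

# Treat a missing field as "not flagged" -- the field simply isn't
# present on tweets without links.
if tweet.get("possibly_sensitive", False):
    print("This tweet's link may point to sensitive content.")
else:
    print("No sensitivity flag on this tweet.")
```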

TechCrunch spoke to Twitter PR rep Carolyn Penner who says that Twitter users will be responsible for flagging their own content as potentially sensitive, which will flash up a warning to other users accessing the media.

“We want to make sure that users are having a good photos experience, while giving them control over what they’re going to see,” says Penner.

Users already have an option in their settings to hide or display NSFW (Not Safe For Work) media, which gives parents a way to control what their children are able to see on Twitter. You can also set your profile so that all of your shared media is marked as NSFW.
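For illustration, here is a hedged sketch of how a third-party client might honour a user’s hide-sensitive-media setting when rendering a timeline. The setting name and filter function are invented for the example; only the “possibly_sensitive” field comes from the API.

```python
# A minimal sketch, assuming the client has already fetched a list of
# tweet dictionaries from the API. "hide_sensitive_media" is a
# hypothetical client-side setting, not an official API parameter.

def filter_timeline(tweets, hide_sensitive_media=True):
    """Return tweets to display, skipping ones flagged as possibly sensitive."""
    visible = []
    for tweet in tweets:
        if hide_sensitive_media and tweet.get("possibly_sensitive", False):
            # A real client might show a click-through warning instead of
            # dropping the tweet entirely, which is closer to the behaviour
            # Twitter describes.
            continue
        visible.append(tweet)
    return visible

timeline = [
    {"text": "Lunch photo http://t.co/aaa", "possibly_sensitive": False},
    {"text": "NSFW link http://t.co/bbb", "possibly_sensitive": True},
    {"text": "Plain text, no link"},
]

for t in filter_timeline(timeline):
    print(t["text"])
```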

It’s unclear if repeatedly flagged tweets will be dealt with in a manner similar to spam (i.e., removed or hidden).

Twitter says this won’t be used to censor content, but however you want to look at it, that’s pretty much what this is, even if the impact is only felt on a per-user basis (depending on each user’s settings). And the problem with user-controlled systems like this is the same as it’s always been: one man’s David is another man’s pornography. I’ve never been convinced that an on/off switch controlling what people see on the internet is an effective measure, both because people (and algorithms) make mistakes and because most kids can easily get around any such limits.

Bottom line: I am rarely concerned with what people don’t want to see on Twitter, or anywhere else. I am, however, very concerned about those same people telling me what I can or cannot see.

Twitter’s stance on tweet censorship is well documented – namely, they don’t do it – and they’ve been very bold in the way that they have continued to support freedom of expression. But that’s a different thing to letting users decide what should and shouldn’t be seen on the platform. You’d like to think the collective intelligence will ensure that the fringe doesn’t have too much of a say, but wisdom doesn’t always come in crowds. It’ll be interesting to see how this one plays out.

(Source: TechCrunch.)
