
Crisp Thinking Promises to Prevent Your Worst Social Media Nightmares


We’ve already established that social media screwups can affect even the seemingly infallible among us.

Such failures may be an accepted risk of doing business in the digital realm–but wouldn’t you like a little more in the way of security to ensure that your client doesn’t pull a U.S. Airways?

UK-based company Crisp Thinking started as a provider of child protection technologies for Internet service providers. Its latest product promises to deliver the unthinkable: protecting your company and your clients from the kind of missteps that can quickly go viral–and ensure days, if not weeks, of terrible headlines.

Curious? We spoke to founder Adam Hildreth for more details.

Could you describe your basic service for the layman?

Our current service offers 24/7 oversight of all inbound content posted on a brand’s social media pages. We collect all the content from their channels in real time using a combination of advanced tech and human supervision, and we classify everything into 43 general categories: spam, offensive materials, etc. We also tailor categories for individual clients.

This combination of automation and human moderation allows us to categorize everything posted to a page within 15 minutes in 50 languages.

How does it differ from similar moderation services?

Other products use keywords but don’t understand the context: is this really a bomb threat? Does this comment qualify as “offensive content?”

Does this not fall under the purview of a social media/community manager?

The job of a community manager should be to engage positively with people, but in many companies, the manager is also the moderator. No one can work 24/7, so content gets missed and crises escalate quickly.

To what degree can you customize the service for individual clients?

We use an advanced filter, but our team reviews everything anyway, and we tailor it to different clients. Here’s an example: we can track social media comments about a battery overheating on a certain model of mobile phone in a certain region.

If a brand is going through a social crisis, we specifically categorize everything related to that crisis as positive or negative and inform the brand.

But your new product turns that equation on its head by monitoring outbound content, correct?

Yes. The question is: how do you control outbound messaging and, for example, stop a customer service agent from posting something (whether deliberately or by mistake) that’s not acceptable for the brand?

With customer service, there’s very rarely an approval process. We put a person in the middle to review but do it so quickly that you’d never know it happened, categorizing every statement and stopping “issue-based communications” from going out.

To whom do you bounce these problem messages?

That depends on both the customer and the type of content. The standard approach is that we stop it from ever getting posted and let the client know. In more extreme cases, we stop it from going out, then contact the head of comms/services according to brand guidelines.

How do you pitch your service to, say, U.S. Airways?

Brands are very scared of what their employees say online, despite the need to engage with customers.

It comes down to: who’s controlling what your employees are saying in the moment? You could be fine for years with a traditional community manager, but it only takes one accidental copy and paste or one disgruntled employee to make you the next brand trending on Twitter.

What sorts of clients do you serve, and how many messages do you review and remove?

We have lots of media broadcasters, a major airline, alcoholic beverage brands, mobile phone makers, etc. But this model is applicable to every single industry.

We review millions of messages. For some of our clients, we remove up to 20% of inbound content.

For example: for certain high-end fashion brands, links from counterfeit goods spammers amount to about 35% of comments on Facebook posts related to campaign launches.

What’s the most common type of content you “catch?”

We most often deal with alerting clients to consumer complaints made on their social pages.

In terms of extreme threats, the most common one is customers posting suicide notes on a brand’s Facebook page. We deal with that every day.

The second biggest problem is threats to staff or even terroristic threats.

We also see a lot of celebrity complaints: someone with lots of followers making complaints against a brand. We can spot that within minutes, and if the brand can respond quickly then they may have a happy celebrity.

Should agencies that specialize in social feel threatened by your service?

Not unless a person literally wants to sit and review everything 24/7.

For example, we alerted a social media PR manager for a major airline to an event trending about the company at 2 AM; we picked up the phone within six minutes of it happening, woke him up and told him he needed to address it ASAP.

He looked amazing because he was the first person to alert the rest of the business.

What do we think? Are services like those offered by Crisp Thinking the future of social media management?
