New Facebook detection takes AI to another level.
With social media more influential than ever, the technology is here to stay, and it looks like the part it plays in our lives is only going to get bigger.
Facebook has developed a technology called “Proactive Detection” which can detect whether a user may be suicidal by analyzing the content they post and read.
The recognition technology will decide whether it considers a person’s content ‘troubling’ and can even send the police round to your house if it judges you to be at risk of harming yourself or others.
Of course, this technology sounds like a good idea in theory, but it raises all kinds of human rights problems. Every human should have the right to their own free will. The fact that Facebook feels it can wade in on such huge issues shows how powerful the company believes it is, and indeed it does have the power to go through with ‘big brother’ style programs like this.
TechCrunch has reported:
“Facebook also will use AI to prioritize particularly risky or urgent user reports so they’re more quickly addressed by moderators, and tools to instantly surface local language resources and first-responder contact info.”
The internet, and indeed social media, is a whole world of contradictions. Much of what is posted on social media is contained within the internet ‘bubble’, and most people feel comfortable posting things online that they would never say in real life. This means they could also threaten actions that they would never go through with.
Facebook involving the emergency services in this program could be a potential disaster, with the system at risk of being abused.
But it has already started. Facebook’s Newsroom has reported that over 100 call-outs have already been made to emergency services:
“Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts. This is in addition to reports we received from people in the Facebook community.”