Last Friday, it was revealed that Facebook had fired its “Trending Editors” team, who looked after its trending news feed (kinda obvious from the name?) and had only recently been in the news over accusations of bias from conservatives in the US. It also comes fairly soon after Facebook rolled out its sections and topics approach.
The editors were replaced by an algorithm (whose results were apparently vetted and published by a new human team).
Within a few days, this system failed terribly (link goes to The Atlantic, and I’m saying that because it’s The Atlantic, who are a fantastic site), with a completely fake news story appearing in the feed, removed only some hours after the web exploded.
The article wasn’t the usual fake celebrity gossip; it claimed that the highly respected Fox News anchor Megyn Kelly had been fired for being a ‘traitor’ to the conservative cause by endorsing Hillary Clinton. I’m not going into the details of the article or what was wrong with it.
Far more relevant is this: how can a corporation be held accountable for the actions of its machines? It’s a legal and conceptual struggle that is growing from a pebble into a landslide in 2016.
The pebble has been rolling along for a little while, gaining momentum with key pushes from people like Elon Musk.
About a year ago a robotic arm at a Volkswagen factory crushed a man, and there have been a few incidents since, with a lot of discussion about liability when robots and autonomous products kill people. I’ve seen the trolley problem discussed more times in recent months than in my entire life before!
The most prominent recent examples have been the accidents involving Tesla vehicles while they were being driven by Autopilot.
Satya Nadella, CEO of Microsoft, and someone I’m rapidly developing a bromance for (seriously, he’s rebuilding a real powerhouse over there at Microsoft), has weighed in with his own rules/principles/goals for AI, as well as some cultural principles for humanity that reflect our changing tech-centric society. Google’s engineers have signalled their intent to get focused on it too, especially in the fallout after their self-driving car got into an at-fault accident early this year.
What does this lead to? Who is at fault when a machine makes the same mistakes we do? There are the mistakes that cause actual damage and harm, such as car crashes.
And then there are the ones that cause emotional distress or defame a person, as in the Megyn Kelly case.
The key to these is both the practical maturity of the system (and its need to keep on learning) and its emotional maturity.
Practical maturity is technically easier. There’s a point at which a system has enough data to process and compare all theoretical options against real-world indicators. In theory, of course: as they say, the system gets better the more you throw at it. The same works with people; ideally we keep on learning and improving as we age. But there’s a trust moment where we are counted as an adult and are able to drive and operate maturely and independently. One supposes that AI will have to undergo some similar tests for maturity.
Emotional maturity calls into question all sorts of concepts around intent and the idea of nature versus nurture. An AI doesn’t naturally care about Megyn Kelly’s reputation; it cares about accurate data matching and popularity with users. Should the AI care about other people’s reputations? How? Why? That kind of concern comes from our nurture, which raises the idea of the programmer as parent. And if the programmer is the parent, who the hell is the corporation in this weird family?
The next question, of course, is how this is judged and licensed. What happens if there’s a team of programmers? Ethics and empathy are raised as key milestones for AI by both Satya Nadella and the Google team (who were more practical in detailing the need for safety and oversight).
How does an AI learn the limits of empathy? Microsoft may have come closest to that, with its work on Xiaoice and Tay, two extremes of social acceptability in AI that demonstrated remarkable ways to empathise with, or abuse, people socially.
It’s incredible work going on, and all in the real world too. Who taught the computer to feel love, or to hate, or… well, to just be an asshole?
I think I’m done for the night. My brain can only take so much HAL.