This is a good and extensive summary of the Cambridge Analytica scandal, well worth reading.
In short, it’s clear that Cambridge Analytica was able to use Facebook’s architecture and algorithms to make a mockery of Facebook’s claims of user privacy – accessing, mining, analysing and influencing users in ways not thought possible or permissible, quite probably helping to influence the 2016 US election.
The fallout is fascinating: from the rather meaningless self-flagellation by Cambridge Analytica in suspending CEO Alexander Nix, to Facebook’s own fairly meaningless statements, to Elon Musk removing his companies’ and his own FB accounts in a fit of #deletefacebook, to Zuckerberg’s commentary and self-reflective “it’s not whether Facebook should be regulated, but how”.
And then there’s this ‘blueprint’ presentation, which goes into some detail about what they did (it’s a sales presentation, so take it with a grain of “would you like to spend your millions of dollars with us”).
These moves are fascinating, but very perfunctory. Zuckerberg made some lightweight apologies and commitments to self-regulate, which, to me, just feed into the idea that he wants to be President one day. He’s really practising how to apologise without actually being sorry.
The most interesting to me is the additional colour added to an older story of how the Obama campaign (yes, it’s a Fox News link) was able to use similar data scraping of Facebook’s Open Graph API to get a whole lot of user data… and how Facebook’s representatives are alleged to have been okay with “their side” doing this data scraping.
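For context on how that scraping worked mechanically, here’s a minimal, illustrative Python sketch of the kind of call the pre-2014 Graph API (v1.0) permitted, where one user’s consent opened up data about all of their friends. The endpoint, fields and token name below are indicative only – my own reconstruction, not a working recipe, since this capability was shut off years ago:

```python
# Illustrative sketch only: under Graph API v1.0, an app holding one
# consenting user's token could pull data about that user's friends.
# Endpoint and field names are indicative; this has long been disabled.
import requests

ACCESS_TOKEN = "TOKEN_GRANTED_TO_A_QUIZ_APP"  # one user opts in...
BASE = "https://graph.facebook.com/v1.0"

def harvest_friends(token):
    """One consenting user yields a page of their friends' data."""
    resp = requests.get(
        f"{BASE}/me/friends",
        params={"access_token": token, "fields": "id,name,likes"},
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# ...but the harvest covers everyone in their network,
# none of whom ever consented.
for friend in harvest_friends(ACCESS_TOKEN):
    print(friend["id"], friend.get("name"))
```

One opted-in user, hundreds of harvested friends: that multiplier is the whole story of the 50 million profiles.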
Generally I’m of the view that companies can and should be allowed to have an opinion. Their views can and should be driven by their executives, shareholders and employees, and be subject to the same regulation and openness that an individual is subject to. Since we, as users, consent to share our information with a private company, we also consent (as long as the terms of use make it clear) to that information being used and handled by a private company on its own walled platforms and technology. At this point in time, regulations are popping up everywhere for how companies should deal with this kind of private data.
That Cambridge Analytica managed to game Facebook’s system confidently and relatively easily is a comment on Facebook’s naivety, and on general consumer naivety over data sovereignty and use. One of the more interesting parts of the discussion is whether this land grab of 50 million users constitutes a data breach. Facebook are desperate, of course, to paint it as anything but a data breach (for obvious reasons), and it’s true that it resulted only from a business model in action. However, isn’t this the same as a software system that allows the word “password” to be used as the password? If someone can access a user’s account because of bad password practices, then the company’s business model has enabled a data breach. I see this as the same scenario. It’s often said that most hacks work because of social engineering, not technical prowess. The same occurred here – the social engineering of users, enabled by a “because we can” architecture that created its own back doors.
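To make the analogy concrete, here’s a toy Python sketch, entirely my own invention rather than anyone’s real validator. A policy that accepts “password” is a design choice, and so was the friend-data architecture:

```python
# Toy illustration of the analogy: nothing is technically "hacked" when an
# account falls this way, but the system's own rules enabled the breach.
WEAK_CHOICES = {"password", "123456", "qwerty"}

def accept_password(candidate: str) -> bool:
    # The "because we can" policy: any non-empty string is fine.
    return len(candidate) > 0

def accept_password_sanely(candidate: str) -> bool:
    # The fix is policy, not technology: refuse the obvious back doors.
    return len(candidate) >= 12 and candidate.lower() not in WEAK_CHOICES

assert accept_password("password")             # the enabling business model
assert not accept_password_sanely("password")  # the regulated alternative
```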
It’s easy to take some schadenfreude in the $50 billion or so drop in Facebook’s value, and in the idea that Cambridge Analytica is in trouble for its nefarious activities, but let’s be honest, these are blips on the radar for both of these highly driven and very intelligently run companies. Cambridge Analytica will take a hit, have staff turnover, and get back to dealing with companies and organisations who give an absolute shit about how they managed to game Facebook, because they want to do it too. Everyone’s been wanting to game Facebook: from governments and black market groups who want the user data and analysis, to ad agencies and retailers who want more of the revenue that Facebook hoards, to publishers who want back the audiences that Facebook grabbed. And more.
R.R.R.Regulate?
This leads back to the regulation argument. Should Facebook be regulated? Is it a service and an inalienable right? Tough call. Perhaps in future it will be. The immediate thing that comes to mind is China’s social credit system, which is already underway and recently hit headlines with the idea that it would restrict travel for those with low social credit. Idealists and privacy freedom fighters recoiled in disgust at the idea that not only was social credit impacting real life in the same way as a criminal record, but that it was being regulated and used in such a way.
And yet… let’s look at the patterns developing. Increasing pushes to regulate technology. Gamed social media for the purpose of influence. More open discussion around when and how data needs to be regulated, rather than whether. So let’s start thinking of data as an inalienable right, in the same way as water or currency. The services that transport and enable access to it are already akin to the railway and telephone systems of previous generations.
So where and how do we determine the value of the data that we are generating and consuming? Data is a consumable product, let’s get clear on that – we are generating it, trading it, owning it, selling it, hoarding it. The next thought is a bit of a step, but let’s view data as a living thing.
What?
Yes, alive. It grows, it branches off and creates new offshoots of data. It flourishes if fed well; it has provenance and sovereignty. It can be abused, and it can die.
The next step is one that China is already exploring: a system that rewards those who help data (defined as social credit) grow and become more valuable, and punishes those who abuse data (social media trolling, for example). There’s a fine line here between whether we are talking about data as the core living thing, or trust as the living thing. In many ways we can use the same concepts for trust as for data – abuse, growth, branches. This is the core conceit of blockchain, for example. However, I prefer to use data as the core definition, primarily because I think data translates more immediately to a consumable concept than trust does. I wonder if this article would work if I replaced “data” with “trust” (I’ll have to try that, but back to the point first).
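To make the metaphor a little more concrete, here’s a toy Python sketch of “living data” with social credit attached. Every name and scoring weight is my own invention for illustration, and bears no relation to how China’s actual system is built:

```python
# A toy model of "living data": it has provenance, it grows when fed,
# it withers when abused, and its steward's social credit tracks both.
from dataclasses import dataclass, field

@dataclass
class LivingData:
    owner: str
    provenance: list = field(default_factory=list)  # where it came from
    health: int = 100                               # flourishes or dies

@dataclass
class SocialCredit:
    score: int = 0

    def reward_growth(self, data: LivingData, note: str):
        # Feeding data well: it branches, and the steward gains credit.
        data.provenance.append(note)
        data.health += 10
        self.score += 5

    def punish_abuse(self, data: LivingData, note: str):
        # Abuse (trolling, scraping) damages the data and the abuser's credit.
        data.provenance.append(f"ABUSE: {note}")
        data.health -= 25
        self.score -= 20

profile = LivingData(owner="user_123", provenance=["signup form"])
credit = SocialCredit()
credit.punish_abuse(profile, "harvested via a quiz app")
print(profile.health, credit.score)  # 75, -20
```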
Like physical and criminal abuse, the abuse of data is very much a government and justice system definition. I could see many governments shoving the concept of data abuse to one side, while others obsess over the characteristics and reform of data abusers.
How does this relate back to Facebook? It’s about the regulation, of course. First there’s the idea that Facebook should be regulated – but how do we define it? As a social service? Is social media an inalienable right when Facebook’s version of social media is actually a different service from WeChat, Twitter, etc.? It’s not a transport or communications medium like Amazon or a telco. But it is a data service. And it does create and manage social standing and community through that data.
Second, there’s the concept of data abuse. It’s debatable whether this was a data breach – which would be a much more powerful concept to throw around than the current slap on the wrist with a wet noodle – but if we use the concept of data abuse, then we can start to see Cambridge Analytica, and potentially Facebook, in much more serious circumstances, especially in terms of their own social credit.
This kind of social credit regulation will happen around the world, in one way or another – perhaps 5-10 years away, after more breaches or data abuses occur and we begin to understand more about the nuances of living data. I don’t for a second think that China’s system was conceived in this manner. They see it as a means of measuring and controlling their living citizens through regulated data, which is roughly equivalent to seeing social media as a service. However, the social credit system goes beyond social media. In truth, it’s a system to monitor, manage and reward how well a user treats data.
And that is something I can definitely see getting regulation around the world.