There’s a lot being made out of Apple’s decision to use differential privacy in iOS10.
WIRED, as usual, have a solid write-up.
In his part of the WWDC Keynote address, Craig Federighi mentioned that Apple was shifting to differential privacy as a means to collect even more data while protecting user privacy at the same time. In brief, differential privacy allows a company to collect huge amounts of data while adding just enough ‘noise’ (slight variations of the collected data) to strike a balance between keeping the data collection points (the users) unidentifiable and keeping the aggregate data accurate. The idea was expanded on a bit later in the week.
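To make the ‘noise’ idea concrete, here’s a minimal sketch of randomized response, the textbook local differential privacy mechanism most explainers reach for. To be clear: this is an illustration of the technique, not Apple’s actual (unpublished) implementation, and the emoji example is mine.

```swift
// Randomized response: the textbook local differential privacy mechanism.
// Each user holds one true/false fact (e.g. "did I type this emoji today?").
// Before reporting, the device flips a coin:
//   heads -> report the truth
//   tails -> flip a second coin and report that instead
// Any single report is deniable (it might just be the random coin),
// but across millions of users the true rate is still recoverable.
func randomizedResponse(truth: Bool) -> Bool {
    if Bool.random() {       // first coin: heads, tell the truth
        return truth
    } else {                 // tails: answer with a fresh random coin
        return Bool.random()
    }
}
```

The per-user privacy comes from the fact that any ‘yes’ might just be the coin talking; the accuracy comes from knowing exactly how often the coin talks.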
One of the main criticisms of differential privacy (after everyone did a quick Google about it) is that it has been, to date, theoretical, and that Apple has jumped to mass deployment without any field testing. This tweet from Johns Hopkins cryptography professor Matthew Green has made the rounds rapidly since then.
Since no one understands differential privacy at a consumer level, we aren’t nearly qualified enough to make an informed call on it. I have seen his words quoted verbatim on a lot of sites, for better or worse, as he is both an expert and the most visible voice of authority outside Apple.
Hell, I barely understand how a zipper works (hint: it’s black magic), and here I am blogging about it.
Let’s deal with a few of the main points:
The argument that Apple are jumping from theory to mass deployment. Absolutely, from a strict maths point of view, this is horrible practice with real implications. But the beta version of iOS10 IS the test of the technology. It isn’t going straight to live in September/October. The developer site already has a starting point for documentation, and it’s going to get ‘peer reviewed’ by developers.
The initial release of it in beta is much more limited than the gee-up WWDC Keynote would imply. In fact, it’s almost disappointing to see it limited to the Search API for now, with the suggestion that it would be useful for crowdsourcing data about deep linking. It all seems very mild…and not at all ‘widespread deployment’. Great.
Welcome to field testing in a new generation of ‘Agile’ development. This is differential privacy as a Minimum Viable Product, tested on one of the biggest, most public, global pools of data variables. If it fails, it should theoretically take only a point release to remove the system from iOS10 – or, ideally, a new beta point release so it never sees public use.
Of course, if it does fail, and we see masses of user data exposed, we’ve got a huge problem. That’s an unknown, with crazy implications, especially given the work and obsession that Apple has put into privacy lately.
Hence the absolute reliance on the beta program to work. And why they have 3 months to get it working for a public release (based on their usual September/October timeline of releasing new devices and a public iOS update) and are starting deliberately small. If anything, they could delay the release of differential privacy within iOS10 to a very small sprint release, which happens to fit in with their existing methodology of .x.x OS updates. This is what happened with Apple News, which was announced for the USA, UK and Australian regions prior to iOS9, but only released to the AU market after the others. Apple’s famed walled garden works to its advantage in this case – controlling the development environment allows them to understand and manage the successes and failures of the program.
Then the question mark moves to how Apple will approach the personal variations in how a user chooses to share their data across apps. How will the ecosystem adapt and share if I have a full-access login on my eCommerce app (e.g. Amazon) that knows all my details, but choose anonymity on a health app that the eComm site then partners with? How accurate is the data being shared? How accurate do I want it to be?
The Apple Services Business
From an Apple business perspective, imagine the upside if Apple manage to pull this off. I’ve already discussed Apple’s shifting business model towards becoming a service-oriented business, with a major reliance on a successful implementation of user-friendly encryption. I said it then:
Could we see a new future for Apple here? Where they once made ‘mobile’ commonplace, could they do the same with a service-driven future? Heck, could they make ‘infosec’ (information security) as essential to our lives as sharing?
and I really see that happening. I had the feeling something solid was building for WWDC 2016 – could this be it?
If they manage to prove the success of differential privacy at global scale, it could fundamentally shift Apple’s offering, drawing a new line between them and the ongoing debates about government and private control over end-user privacy data.
This doesn’t really solve the question of sovereignty and privacy ownership – and potentially makes it cloudier, given that differential privacy is a new field of work. If a user can be identified, can they be accurately tied to a specific behaviour, or does the noise that gives differential privacy its power also make it harder to identify an actual individual? This will be interesting to see in practice.
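For what it’s worth, the formal guarantee behind that question is easy to state. This is the standard academic definition (due to Dwork et al.), not anything Apple has published about its own parameters: a mechanism M is ε-differentially private if, for any two datasets D and D′ that differ in one user’s record, and any set S of possible outputs,

```latex
\Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon} \cdot \Pr[\,M(D') \in S\,]
```

The smaller ε is, the less any one person’s data can change what the system outputs – which is exactly what makes tying an individual to a specific behaviour hard. (The coin-flip sketch earlier works out to ε = ln 3.) The trade-off is that smaller ε also means noisier aggregates.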
Nor is this, to be honest, a new problem, given the difficulty we have dealing with the concept and nature of Big Data. It is also likely not one that Apple will have to face till 2017, which is when I predict a deeper implementation of differential privacy will be made public. If they follow the previous releases, especially iOS9, the real watershed moment wasn’t the gold 9.0 release, but 9.3 in early 2016.
Does differential privacy make me more money?
The other interesting perspective is the discussion it opens up: how much user privacy and behavioural data do we actually need to operate a business?
I’m going to approach this from a partner perspective – as if I were a major company looking to target a new product campaign at as many users on the iOS ecosystem as possible. I would argue that definitive user data isn’t essential: if the margin of error in my target number drifts from 5% to 6 or 7% (still a difference in the hundreds of thousands, if not millions), it barely matters as long as I hit my base target, especially if the campaign still hits or grows the conversion targets that actually make me revenue. Behaviour, by its nature, is a grey cloud with innumerable natural variations in what causes a single user’s ZMOT.
I love the Zero Moment of Truth. Reminder: write about it…
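Back to the numbers: a quick simulation shows why that kind of variance barely matters at campaign scale. Using the coin-flip scheme sketched earlier (again, an illustration, not Apple’s pipeline, with a made-up 30% true rate), the reported ‘yes’ rate p relates to the true rate q by p = 0.25 + q/2, so the business can simply debias with q ≈ 2(p − 0.25).

```swift
// Simulate a million users whose true rate for some behaviour is 30%,
// collect only noisy coin-flip reports, then debias the aggregate.
// With randomized response, P(report yes) = 0.25 + trueRate / 2,
// so the true rate is estimated as 2 * (reportedRate - 0.25).
let noisyReport: (Bool) -> Bool = { truth in
    Bool.random() ? truth : Bool.random()
}

let users = 1_000_000
let trueRate = 0.30

var yesReports = 0
for _ in 0..<users {
    let truth = Double.random(in: 0..<1) < trueRate
    if noisyReport(truth) { yesReports += 1 }
}

let reportedRate = Double(yesReports) / Double(users)
let estimatedRate = 2 * (reportedRate - 0.25)
print("reported: \(reportedRate), estimated true rate: \(estimatedRate)")
// Typically lands within ~0.2% of 0.30: no individual report can be
// trusted, but the campaign-level number is as good as ever.
```

No individual answer in that pool is worth anything to an attacker, yet the aggregate lands comfortably inside the 5-to-7% tolerance a campaign planner actually cares about.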
In this case, differential privacy has incredible potential: it could provide more easily gathered data that satisfies the business ROI, while using the variance to keep users happy amid the growing fascination with secure privacy.
Is this the transformative moment for Apple in becoming a service-driven business, one that evolves away from hardware fascination and into consumer services?
It’s, at least, a major, major step.