I was having a thought-provoking chat with a friend the other day. As happens these days, the chat was over Facebook Messenger, not face to face. We were discussing things like the PageUp security breach, and I mentioned that one of the companies involved in the compromise had sent an email to potentially affected users noting that if they had submitted a job application including references, they should not only check their own security, but also tell their referees that their personal contact details are now compromised.
It’s breach by association, at which point my friend compared it to having an STD/STI (Sexually Transmitted Disease/Infection). And you know what – he’s completely right. You can imagine social circles of the future, judging each other by how often and glibly they ‘sign up’ to another website. Or parents of the future sitting down their children and telling them not to give up their data to any old sign in. Yeah, it’s a weird tech fever dream, but there’s an element of truth.
The reality is – how many sign ins do you use every day? How many systems do you sign in to without even thinking? I’m writing this at 11.30am, and have already entered passwords into about 6 systems, and been authenticated (i.e. the password was saved in my browser) for at least 6 others. And that’s counting my desktop alone.
And each one is a potential security vulnerability and the creation of a web of contacts outside our control. How many ‘connections’ does each of those sign ins have? How many places have you used that oh-so-easy “Sign in with Facebook” button? And how many places have I used personally identifiable information (PII) and personally identifiable data (PID) in a browsing session? I keep coming back to the truism I heard a few years ago – that over 90% of the population could be individually identified by 3 or 4 regular locations they visit during a given week: school address, home address, work address, local supermarket.
Most people could probably be identified within a few minutes not only by location, but by what sites they are looking at in that browsing session as a reflection of their personality – this morning I’ve browsed tech stories and gone online shopping for NBA basketball gear. I dare say the likelihood of anyone else on my street having similar browsing habits is pretty slim.
Especially anyone who signs in as much as I do.
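The re-identification point above can be sketched with a toy example: each attribute on its own matches plenty of people, but combined they can single someone out. The records, names and attribute values below are entirely invented for illustration.

```python
# Toy illustration: re-identification by combining a few coarse attributes.
# Every record and value here is made up; the point is how quickly the
# candidate pool shrinks as observed attributes are combined.
people = [
    {"name": "Chai",   "street": "Elm St", "browses": "tech+nba", "time": "11:30"},
    {"name": "Alex",   "street": "Elm St", "browses": "cooking",  "time": "11:30"},
    {"name": "Sam",    "street": "Oak St", "browses": "tech+nba", "time": "11:30"},
    {"name": "Jordan", "street": "Elm St", "browses": "tech+nba", "time": "09:00"},
]

def match(records, **attrs):
    """Return the records consistent with every observed attribute."""
    return [p for p in records if all(p[k] == v for k, v in attrs.items())]

print(len(match(people, street="Elm St")))                      # 3 candidates
print(len(match(people, street="Elm St", browses="tech+nba")))  # 2 candidates
print(match(people, street="Elm St", browses="tech+nba", time="11:30")[0]["name"])
# -> Chai: three weak signals, one unique person
```

The same logic is why a street address plus a browsing fingerprint plus a timestamp is, in practice, a name.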
It leads back to my previous thoughts about how we treat data in the future. In essence, data is fast becoming almost an allegorical human being: a narcissistic mirror-clone of ourselves, complete with its own network of connections, preferences and personality that may (or may not) reflect our own. We’ve all seen the work that companies like Cambridge Analytica did to try and use that mirror of our preferences for political and financial gain. And there’s already a whole industry (data brokerage) built around the storage, sharing and – importantly – growth of our data personalities’ connectivity.
Think of how quickly online systems start targeting advertising even after a single, supposedly innocuous Google search. It’s almost immediate that the next website visited starts serving ads for that quick search for handcuffs and whips.
What this means, simplistically, is that these companies are helping our data personalities grow and mature into ever more complex clones of ourselves. In more ways than one, data brokers, data warehouses and companies like Google are farming our data personalities – both owning them for profit, and helping them grow.
The concept of data brokerage is attracting more widespread interest, and legislation is evolving with it. The state of Vermont passed a bill in May 2018 regulating data brokerage.
This seeks to enforce transparency and allow greater consumer control (e.g. opt-in) over the data broker market – companies who have built a business out of warehousing and onselling consumer data, collected from a very wide variety of sources and ‘normalised’ (i.e. made easy to correlate and compare). Since data use is the modern equivalent of the gold rush, this is obviously a good business, with many potential customers.
The Bill goes on to detail protections for a consumer’s right to opt out, transparency over how their data has been used, and so on. Two main points of interest flow from here.
So that’s all well and good, but how will this evolve as our data personalities grow beyond our knowledge? What happens when the amount of subjective consent becomes too big to manage (i.e. already!) – as in, I want to use Facebook, but don’t opt in to US-based advertising, or EU data protections, but still want localised advertising. And those preferences are different on Instagram.
Who controls, owns and stores all this – how, where, and why? Let’s accept the legitimacy of licensed data brokers (and yes, to be clear, I do think data brokers are probably the best option for this). Then, what happens in a breach scenario when our data personalities are abused, enslaved or become violent?
Before we look at that, let’s look at each of those concepts.
Slavery and kidnapping
How could our data become a slave, or be kidnapped? Well, simply put, if our data is being used without consent for the benefit of others – the best example would be the theft of our details and their use in online criminal activity (a security breach and theft of identity).
There are laws already in effect for this, of course (fraud etc), but I’m suggesting that this could theoretically be extended by thinking of a security breach and use of an identity in the same vein as kidnapping, rather than theft and fraud. Kidnapping and enslavement concepts carry more weight as our data personalities become more solidified mirror-clones of ourselves.
The rehabilitation a victim has to go through to check and repair their online identity – their data personality, which has been ‘marked’ by a criminal datapoint – is one example of how much more complex the data personality is becoming. As security breaches become more prevalent, and as data personalisation becomes ever more widespread and embedded in our activity, it will become harder to ‘repair’ these datapoints so that a data personality becomes a whole being once more, let alone to identify what it meant to begin with.
Violence and harassment
In March 2018, it was announced that the NSW state government was reviewing defamation laws, specifically to update them in response to increasing litigation involving social media. There are similar moves in the UK legal system, with harassment and the laws around ‘upskirt’ photos taking recent prominence.
This affects brands and companies as well – which will depend a bit on your opinion of whether a company should be given the same rights as a human. Earlier this year, the Hells Angels chapter in Manitoba, Canada, wanted to ruin a local business that had offended them. Not through the traditional violence we associate with bikers, but through social media bullying.
As we see an increase in social media baiting, swatting, trolling, and other aggressive online tendencies, we need to acknowledge that these are also creating a new type of violent data personality that may, or may not, be the same person that exists offline. Not that it matters – the result is that these people are not only exhibiting damaging behaviour, but also creating extremely complex data personalities that connect victims and perpetrators in new ways (as we see in the disturbing cases of online bullying).
EDIT: This piece was published in the New York Times. It absolutely encapsulates the future of data personality abuse.
She said she did not know how all of the technology worked or exactly how to remove her husband from the accounts. But she said she dreamed about retaking the technology soon.
By which I mean we need to understand and define the ways a data personality can be abused, and how this differs from, and impacts, our offline personality.
What are the ways a data personality can be abused? By being used in a manner such as the above? By being connected to data and networks that do not share the personality’s values (forced connection – e.g. Cambridge Analytica), or by being exposed to unwanted personalised advertising without consent? If I’m a staunch Republican but I’m now being pummelled by Democrat posts on social media that I cannot control, then I’m going to start feeling hard done by. Abused – like my data personality is no longer in control.
The magic word is consent. Not only do we need to control consent, but our data personality needs to be able to knowingly consent. The more reliant we are on convenience, the less knowledgeable we become about what we’ve connected to.
Obviously that’s ‘simple’ enough to address by introducing consent management for the general user. As complex as such a system would be, it would be a step in the right direction. These systems are already being explored worldwide by many organisations, from commercial to government, non-profit and nefarious (as soon as a consent mechanism is established, so are the mechanisms to disrupt and destroy it).
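To make the idea concrete, here is a minimal sketch of what a per-service consent record might look like, assuming an opt-in model where anything not knowingly granted is denied. The class, purpose names and services are hypothetical, invented for illustration only.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an opt-in consent record: each purpose must be
# knowingly granted, and the default for anything unknown is "no".
@dataclass
class ConsentRecord:
    service: str
    granted: set = field(default_factory=set)  # purposes explicitly consented to

    def grant(self, purpose: str) -> None:
        self.granted.add(purpose)

    def allows(self, purpose: str) -> bool:
        # Opt-in, not opt-out: absence of a grant means refusal.
        return purpose in self.granted

# The same person can hold different preferences per platform,
# as in the Facebook/Instagram example above:
facebook = ConsentRecord("facebook")
facebook.grant("localised-advertising")

instagram = ConsentRecord("instagram")

print(facebook.allows("localised-advertising"))   # True
print(facebook.allows("us-advertising"))          # False
print(instagram.allows("localised-advertising"))  # False
```

Even this toy version shows the scaling problem: one record per service, per purpose, per person – and that is before any broker starts correlating them.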
Considering that the AFP page referring to identity theft notes that identity fabrication is technically illegal, it suggests that definitions of identity are a couple of decades behind the interweb. It then notes that you can get a certificate as a victim of identity theft, usable in future to prove your legitimacy. Bad note: the link on that page doesn’t work…good note: here is the correct one.
In other words, the system is way behind the 8-ball. Right now, we don’t have a way of identifying when, and by how much, our data personalities have deviated from the original mirror-clone of ourselves. There’s certainly no simple version we can roll back to (ah, if only I could go back to Chai v11.56, before I clicked on that bloody link).
How can we repair or rehabilitate a personality after an incident? If my data personality has deviated beyond my consent, without my control, how can I get it back? Are those connections permanent?
And at a metaphysical level – will I recognise my data personality in the mirror?
To go back to the STD/STI metaphor, once infected, do I have Sign In for life?
And I’m talking about both the offline and online personalities. As the data personality gets more individualised, I can see it getting harder to ‘start again’ with a new data personality. Besides the obvious change of identity and its impact on legitimate PII, my new identity would be, in a sense, already vulnerable and connected to previous breaches. To use my previous example, there is no difference between chai.lim and charlie.lam if both are reading trending tech stories and shopping for NBA gear on Firefox at 11.30am in the same (or near enough) geographic location.
This delves into concepts of sovereignty and regulation, let alone consent. It delves into our notions of humanity and personality as data constructs.
How do we treat data personalities? Clearly they’re not ‘human’ in our traditional sense of the word – they’re definitely not sentient. Yet they are growing, connecting, and need to be tended to. Plants, perhaps? Weeds, the way they are growing beyond control?
If we start accepting and looking at the importance of protecting these data personalities, and the ability and importance for a data personality to be involved in consent, then we could even look at them in the same way as pets. I’m hesitant to call them children, but considering how much they are mirror-clones of us, perhaps they are like children in some way. Okay, they aren’t self aware. That’s clear.
What are they? I don’t have an answer, or a short catchy phrase. It will be something to re-visit over the next few years. Beyond definition, what can we do with them – and, to draw the most important connection, how do they reflect or deviate from the actual human?