In this post from our ‘Privacy in focus’ blog series we discuss notice and consent — key cornerstones of privacy regulation both in Australia and around the globe — and key challenges in how these concepts operate to protect privacy.
From the 22 questions on notice, consent, and use and disclosure in the Privacy Act issues paper, there is one underlying question: Who should bear responsibility for safeguarding individuals’ privacy?
This is the fundamental question because any serious discussion of consent must confront the fact that people simply don't engage with privacy notices or privacy policies. Should we expect them to engage with notices and policies at all, and what would follow if we didn't? What could we change about how information is collected and disclosed that would make consent genuinely meaningful?
In this and the next edition of our ‘Privacy in focus’ series, we’ll be digging into some of the challenges — and potential solutions — for notice and consent as cornerstones of privacy regulation.
The gold standard
Information privacy is often defined in terms of individual control — the ability to determine for yourself when others may collect and how they may use your information.
This way of thinking is deeply embedded within privacy laws across the globe, which emphasise transparency, consent and individual rights, apparently in the belief that individuals have the resources, capacity and inclination to manage their data across the myriad sites and services they engage with each day.
Our focus on privacy self-management is also reflected in the way we think and talk about breaches. Because fully-informed consent can justify any use or disclosure of personal information, regulatory actions are often framed in terms of failures of transparency or consent. For example:
- The OAIC’s proceedings against Facebook about the ‘This is Your Digital Life’ App (aka the Cambridge Analytica affair) turn in part on Facebook’s failure to adequately inform affected Australian individuals of the manner in which their personal information would be disclosed.
- The ACCC’s two ongoing proceedings against Google both turn on allegations that Google misled customers and failed to gain effective, informed consent.
The invisible hand
The focus on individual empowerment reflects a neo-liberal commitment to market forces. This argument says that individuals know their own circumstances and preferences best, and in aggregate, individual choices will generate market pressures that push companies towards better privacy protections (if that’s what people want).
This is the logic behind many of the ACCC’s recommendations from the Digital Platforms Inquiry: that if we put the consumer back in control and empower them to make informed choices (e.g. through more robust notification and consent requirements) then we’ll be most of the way there.
It’s an appealing argument, and there is evidence that bears it out at least in some cases. We see technology companies — particularly Apple — increasingly competing on trust and privacy, and a huge proportion of Australian consumers (88% per the OAIC’s Community Attitudes to Privacy Survey) report that they’ve chosen not to deal with an organisation because of privacy concerns.
How rational are we really?
But there is also ample evidence to show that relying on consumer choice to shape the market, or even to protect individuals’ own interests, is rarely effective.
For starters, most of us don’t read privacy policies. The OAIC’s Australian Community Attitudes to Privacy 2020 survey found that just 1 in 5 Australians (20%) both read privacy policies and are confident that they understand them. However, we suspect that these self-reported numbers are exaggerated. Empirical studies show significantly lower proportions reading privacy policies and license agreements in detail (in one study, less than 1 in 1000). Improving and simplifying privacy communications might go some way to address this, but if you consider that the average Australian might have 100 apps on their phone, and visit up to 1500 websites every year — ain’t nobody got time for that.
Secondly, making rational decisions about nebulous future informational harms is not easy. Most of us have little or no understanding of the digital ecosystems through which our data moves, or the ways in which it might be used or disclosed in the future and how these might affect our interests.
Thirdly, even if we did have the time, inclination and expert knowledge required, we'd still make bad choices, because we're human. A range of psychological and cognitive biases get in the way of rational decision-making, even for well-informed consumers. Things like availability bias (the tendency to judge the likelihood of an event by how easily examples come to mind), or a tendency to value immediate gratification or convenience over future benefits. For example, 50% of people surveyed by Deloitte said they had given consent (after previously refusing) because they were tired of being asked repeatedly by the same service. Are we content to let people trade away their privacy when they do so in error, or out of resignation, rather than through genuine informed choice?
Finally, our weaknesses are being actively exploited. How many times have you seen a big inviting green ‘I accept’ button next to a tiny, grey ‘no thanks’? Ever compared how many clicks or taps it takes to turn tracking off with how many it takes to turn it back on? Some digital platforms and services take advantage of psychological and cognitive biases to lead users to make privacy-intrusive selections. This 2018 report from the Consumer Council of Norway presents a detailed explanation of some of the ways Google, Facebook and Microsoft use default settings, ‘dark patterns’ and features of interface design to nudge users towards privacy intrusive options.
Are we lost?
Yes and no.
Notice and consent will always be a critical piece of the privacy puzzle, and there are plenty of things we can and should do to incrementally improve the current system. To name just a few, we can improve the quality of notices, require standard form disclosures like Apple’s nutrition labels for apps, prohibit deceptive design, require privacy protective defaults, or establish higher standards for valid consent.
But user choice and market forces can never be the full solution. We need to get serious about shifting the responsibility for managing privacy harms away from individuals.
Just how we do that is a matter for debate. Borrowing from consumer law and product safety, it might mean a greater focus on organisational accountability to ensure the fairness and reasonableness of data practices. Borrowing from international human rights law and the GDPR, it might mean adopting tests of necessity and proportionality when balancing interference with privacy against legitimate competing interests.
But we’ll get into all that in our next edition.