Is it app-ropriate to require personal information for parking?

elevenM’s Tessa Loftus on encountering technology solutions that are actually privacy intrusions in everyday life.

Last week, I needed to take my daughter for an early morning urgent medical appointment in Rhodes. My initial delight at easily finding a park quickly turned to consternation when it seemed that the only option to pay for my parking — on a public street — was to download an app.[1]

As anyone who’s ever needed to get to a specialist appointment on time (i.e., everybody) would know, you don’t have the luxury of being late. So my options seemed to be: download the app without reading the privacy policy, not download it and risk a parking fine, or find alternative parking.

Despite being a privacy professional, I did as most people would do and downloaded the app, which immediately asked to access my motion and fitness activity (why?) and to send me push notifications. For the app to work, I was required to provide my full name, email, phone number, credit card details and access to my real-time location data. The location request offered the options of once only, only while using the app or, again oddly, always.

Not that I could have read the privacy policy even if I’d had time (which, with our looming appointment, I didn’t): there was no privacy policy linked in the app store, and the page entitled ‘App Privacy’ was blank.

As we noted in our recent blog on the consent catch-22, “Information privacy is often defined in terms of individual control — the ability to determine for yourself when others may collect and how they may use your information.” But moving basic services into privately operated technological solutions and making them ‘accept or don’t use’ undermines the basic notion of consent. If my options are ‘not parking in this suburb’ or providing my name, email, phone, credit card and real-time location to an organisation that doesn’t provide a privacy policy in its app, that is not a real choice, nor is it genuine consent.

Further, where personal information must be provided to use public facilities or to access government services, there is no possibility of a valid ‘consent’ to data processing. I should not have to give up my information to sit on a public bench or park in a public space.

Needless to say, I deleted the app when I left my park. But how do I divorce myself entirely from this app? While deleting the app stops it accessing my location data, it is unlikely that it deletes my data from the database. So now I have to trust in perpetuity that the app developer is protecting my full name, email, phone number, credit card details and location data.

There are simply too many situations where unnecessary collection of information has been slipped into everyday life without people noticing. It is easy to see why a local council and frequent parkers would value the convenience of an app like this, which offers remote extensions of time and linking to a credit card for repeat payments. But what if I don’t want to share my profile with a company I don’t know (or haven’t had time to investigate), or I just want to remain anonymous? What if I’m simply thinking about getting where I’m going, not about digital risk, and in that moment make a decision that later causes me harm?

We should all know by now that with innovation and digital convenience come new risks. It should not be incumbent on consumers to navigate those new risks (especially when they’re under pressure); rather, they should be able to trust the system, knowing that the rules of participation require data collectors to protect people and our social values.

As organisations – both business and government – increasingly look to technology for solutions to the ‘everyday’, we need to ensure those solutions meet baseline protections. I feel entirely comfortable buying the cheapest available car seat for my child, because I know that Australia has strong product safety laws and that someone with more expertise than I have has checked that we will be kept safe.

If I must download an app to park my car, the starting assumptions should include data minimisation, strict use limitation and high standards of security. It should not be used as an opportunity to track and monitor me under the fictional guise of consent. I should be able to feel confident that, even if I do not understand the privacy policy, someone who does has ensured that my welfare is protected.

[1] The Canada Bay council website indicates that areas with app parking also offer regular parking meters. However, this option wasn’t conspicuous to me: the parking sign said ‘phone ticket’, it sat underneath a larger sign saying ‘app-name parking area’, and no parking meter was obvious in the vicinity.

 

Photo by Anne Nygård on Unsplash

 

Privacy in focus: A pub test for privacy

In this instalment of our ‘Privacy in focus’ blog series, we look beyond consent and explore other ideas that could make privacy easier and more manageable.

In our last post, without mentioning milkshakes, we talked about how central consent has become in the regulation of privacy and how putting so much weight on individuals’ choices can be problematic.

This time, we’re into solution mode. How can we make privacy choices easier? How might we start moving away from consent as the touchstone? What might privacy law look like if it didn’t rely so heavily on individuals to monitor and control how their information is used?

Start where you are

It is likely that notice and consent will always be a critical piece of the privacy puzzle, so before we start talking about evolving our entire regulatory model, we should probably cover what might be done to improve our current approach.

Last time, we identified four related ways in which individual privacy choices get compromised:

  • we don’t have enough time
  • we don’t have enough expertise
  • we behave irrationally
  • we are manipulated by system designers.

So what can we do to address these shortcomings?

We can raise the bar — rule out reliance on consent in circumstances where individuals are rushed, do not understand or have not truly considered their options, or in truth do not have any options at all.[1] Raising the bar would rule out a range of practices that are currently commonplace, such as seeking consent for broad and undefined future uses, or where there is a substantial imbalance of power between the parties (such as in an employment relationship). It would also rule out what is currently a very common practice of digital platforms — requiring consent to unrelated secondary uses of personal information (such as profiling, advertising and sale) as a condition of service or access to the platform.

We can demand better design — clearer, shorter and more intelligible privacy communications, perhaps even using standardised language and icons. Apple’s recently adopted privacy ‘nutrition labels’ for iPhone apps are a great example of what this can look like in practice, but we needn’t stop there — there is a whole field of study in legal information design and established practices and regulatory requirements from other industries (such as financial services product disclosures) which could be drawn on.

We can ban specific bad practices — manipulative and exploitative behaviours that should be prohibited. Australian Consumer Law goes some way to doing this already, for example by prohibiting misleading and deceptive conduct (as the ACCC’s recent victory against Google will attest). But we could go further, for example by following California in specifically prohibiting ‘dark patterns’ — language, visual design, unnecessary steps or other features intended to push users into agreeing to something they wouldn’t otherwise agree to. Another, related option is to require privacy protective default settings to prevent firms from leveraging the default effect to push users towards disclosing more than they would like.

Who should take responsibility for safety?

But even if we did all of the above (and we should), taking responsibility for our own privacy in a world that is built to track our every move is still an impossibly big ask. Instead of expecting individuals to make the right choices to protect themselves from harmful data practices, the Privacy Act should do more to keep people safe and ensure organisations do the right thing.

What would that look like in practice? Focusing on organisational accountability and harm prevention would mean treating privacy a bit more like product safety, or the safety of our built environment. In these contexts, regulatory design is less about how to enable consumer choice and more about identifying who is best equipped to take responsibility for the safety of a thing and how best to motivate that party to do so.

Without strict safety requirements on their products, manufacturers and builders may be incentivised to cut corners or take unnecessary risks. But for the reasons we’ve already discussed, it doesn’t make sense to look to consumers to establish or enforce these kinds of requirements.

Take the safety of a children’s toy, for example. What is more likely to yield the optimal outcome – having experts establish standards for safety and quality (e.g. non-toxic paints and plastics, part size, etc.) which manufacturers must meet to access the Australian market, and against which products will be tested by a well-resourced regulator? Or leaving it to the market and having every individual, time-poor consumer assess safety for themselves at the time of purchase, based on their limited knowledge of the product’s inner workings?

Whenever we can identify practices that are dangerous or harmful, it is far more effective and efficient to centralise responsibility in the producer and establish strong, well-funded regulators to set and check safety standards. We don’t expect individual consumers to check for themselves whether the products they buy are safe to use, or whether a building is safe to enter.

Why should privacy be any different?

Just like with buildings or physical goods, we should be able to take a certain level of safety for granted with respect to our privacy. Where a collection, use or disclosure of personal information is clearly and universally harmful, the Privacy Act should prohibit it. It should not fall to the user to identify and avoid or mitigate that harm.

Privacy laws in other jurisdictions do this. Canada, for example, requires any handling of personal information to be ‘for purposes that a reasonable person would consider appropriate in the circumstances’. In Europe under the GDPR, personal data must be processed ‘fairly’. Both requirements have the effect of prohibiting or restricting the most harmful uses of personal information.

However, under our current Privacy Act, we have no such protection. There’s nothing in the Privacy Act that would stop, for example, an organisation publishing personal information, including addresses and photos, to facilitate stalking and targeting of individuals (provided it collected the information for that purpose). Similarly, there’s nothing in the Privacy Act that would stop an organisation using personal information to identify and target vulnerable individuals with exploitative content (such as gambling advertising).[2] The Australian Privacy Principles (APPs) do surprisingly little to prohibit unfair or unreasonable use and disclosure of personal information, even where it does not meet community expectations or may cause harm to individuals.

A pub test for privacy

It is past time that changed. We need a pub test for privacy. Or more formally, an overarching requirement that any collection, use or disclosure of personal information must be fair and reasonable in all the circumstances.

For organisations, the burden of this new requirement would be limited. Fairness and reasonableness are well established legal standards, and the kind of analysis required — taking into account the broader circumstances surrounding a practice such as community expectations and any potential for harm — is already routinely conducted in the course of a Privacy Impact Assessment (a standard process used in many organisations to identify and minimise the privacy impacts of projects). Fairness and reasonableness present a low bar, which the vast majority of businesses and business practices clear easily.

But for individuals, stronger baseline protections present real and substantial benefits. A pub test would rule out the most exploitative data practices and provide a basis for trust by shifting some responsibility for avoiding harm onto organisations. This lowers the level of vigilance required to protect against everyday privacy harms — so I don’t need to read a privacy policy to check whether my flashlight app will collect and share my location information, for example. It also helps to build trust in privacy protections themselves by bringing the law closer into line with community expectations — if an act or practice feels wrong, there’s a better chance that it actually is.

The ultimate goal

The goal here — of both consent reforms and a pub test — is to make privacy easier for everyone. To create a world where individuals don’t need to read the privacy policy or understand how cookies work or navigate complex settings and disclosures just to avoid being tracked. Where we can simply trust that the organisations we’re dealing with aren’t doing anything crazy with our data, just as we can trust that the builders of a skyscraper aren’t doing anything crazy with the foundations. And to create a world where this clearer and globally consistent set of expectations also makes life easier for organisations.

These changes are not revolutionary, and they might not get us to that world immediately, but they are an important step along the path, and similar measures have been effective in driving better practices in other jurisdictions.

The review of the Privacy Act is not only an opportunity to bring us back in line with international best practice, but also an opportunity to make privacy easier and more manageable for us all.


Read all posts from the Privacy in focus series:
Privacy in focus: A new beginning
Privacy in focus: Who’s in the room?
Privacy in focus: What’s in a word?
Privacy in focus: The consent catch-22
Privacy in focus: A pub test for privacy

 


[1] In its submission on the issues paper, the OAIC recommends amending the definition of consent to require ‘a clear affirmative act that is freely given, specific, current, unambiguous and informed’.

[2] These examples are drawn from the OAIC’s submission to the Privacy Act Review Issues Paper – see pages 84-88.

Privacy in focus: The consent catch-22

In this post from our ‘Privacy in focus’ blog series we discuss notice and consent — key cornerstones of privacy regulation both in Australia and around the globe — and key challenges in how these concepts operate to protect privacy.

From the 22 questions on notice, consent, and use and disclosure in the Privacy Act issues paper, there is one underlying question: Who should bear responsibility for safeguarding individuals’ privacy?

When your milk comes with a free iris scan

elevenM’s Melanie Marks’ regular trip to the supermarket brings her face-to-face with emerging privacy issues.

A couple of weeks ago, as I was nonchalantly scanning my groceries, I looked up and was shocked to see a masked face staring back at me. 

After I realised it was my own face, fright turned to relief and then dismay as it hit me that the supermarkets had – without consultation, and with limited transparency – taken away my freedom to be an anonymous shopper buying milk on a Sunday.

Just days later, the press outed Coles for its introduction of cameras at self-service checkouts. Coles justified its roll-out on the basis that previous efforts to deter theft, such as signs that display images of CCTV cameras, threats to prosecute offenders, bag checks, checkout weighing plates and electronic security gates, have not been effective, and that the next frontier is a very close-up video selfie to enjoy as you scan your goodies.

Smart Company reported on the introduction of self-surveillance tech last year, explaining the psychology of surveillance as a deterrent against theft. How much a person steals comes down to their own “deviance threshold” — the point at which they can no longer justify their behaviour alongside a self-perception as a good person.

The supermarkets’ strategy of self-surveillance provides a reminder that we are being watched, which supposedly evokes self-reflection and self-regulation.

This all sounds reason enough. Who can argue with the notion that theft is bad, and we must act to prevent it? We might also recognise the supermarkets’ business process excellence in extending self-service to policing.

Coles argues that it provides notice of the surveillance via large posters and signs at the front of stores. It says that the cameras are not recording, and it claims that the collection of this footage (what collection, if no record is being made?) is within the bounds of its privacy policy (last updated November 2018).

At the time of writing this blog, the Coles privacy policy makes no mention of video surveillance or the capturing of images, though it does cover its use of personal information for “investigative, fraud, and loss prevention” activities.

Woolworths has also attracted criticism over its use of the same software, which it began trialling last year. Recent backlash came after Twitter user @sallyrugg called on the supermarket to please explain any connection between the cameras, credit card data and facial recognition technology it employs. Like Coles, Woolies says no recording takes place at the self-serve registers and that the recent addition it has made to its privacy policy regarding its use of cameras pertains only to the use of standard CCTV in stores.

So it would appear the supermarkets have addressed the concerns. No recordings, no data matching, covered by privacy policy. And my personal favourite, choice: “If you do not wish to be a part of the trial, you are welcome to use the staffed checkouts.”

But these responses are not sufficient. Firstly, there is no real choice in relation to the cameras when a staffed checkout is unavailable. Secondly, our notice and consent models are broken; they overstate the actual power privacy policies grant to consumers. We don’t read them, and even when we do, we have no bargaining power. And lastly, the likelihood of function creep is high. It is not a stretch to imagine that the next step in the trial will be to pilot the recording of images for various purposes, and that it could be navigated legally with little constraint.

On a final note, this experience reflects many of the challenges in our current privacy framework, including the balance of consumer interests against commercial interests, the strain on current consent models, and even the desire for a right to be forgotten.

Thankfully, these issues are all being contemplated by the current review of the Privacy Act (read our ongoing blog series on the review here). We need these protections and structures in place to create a future in which we milk buyers can be free and anonymoos.

Photo by Ali Yahya on Unsplash