Parenting, privacy and the future

elevenM Principal Melanie Marks reflects on the need for better public engagement on the future of privacy, much as is emerging around climate change.

Sitting at a symposium on children’s online privacy a little over a week ago, the footsteps of a thousand marching children ran through my head. Just streets away from the event was the Melbourne base of the national climate change rallies. In the room was an inspiring group of academics, regulators and policy makers, grappling with the future of surveillance and profiling. Both social issues, concerning the future of humanity.

I pondered why the climate rally had delivered so many to the streets now, when we have known about climate change for years. And I concluded that what has changed is that we can see it now, and we are parents. We see the world through our kids’ eyes – strangely short seasons, farmers without water and a disappearing reef are some recent reflections in the Marks household.

Privacy harm is more nebulous. The potential policy issues are hard to solve, and engaging the public is even more difficult.

Who could blame consumers for tapping out of this conundrum? According to symposium speaker, Victoria’s Information Commissioner Sven Bluemmel, the model which underpins global privacy frameworks is broken. It rests on the idea that we the consumers are free, informed and empowered to make decisions. But the paradigm is stretched. AI, if done properly, reaches unexpected conclusions. If you can’t explain it, says Bluemmel, you can’t consent to it. If you don’t collect information – you merely infer it – the moment to ask for consent never comes. Add to this the privacy paradox: consumers say we care, but at crunch time our behaviours belie these words. We are bombarded with notices, consents, terms and conditions, breach notifications, etc. etc. etc. We are fatigued. We would need 65 hours every day to read all this content.

As a result, we consumers are blindly making a trade-off every time we accept terms and conditions we do not read and cannot understand, weighing benefits now (e.g. convenience, incentives, personalisation) against future, undefined harms.

One of the guests at the symposium asks why the regulators aren’t doing more to beat corporations with the available sticks. But this is the wrong question – I mean, it’s a good question, deserving of a response which features cold hard facts, plans and budgets, but it is not the right question, right now. The real question is what kind of future we want for ourselves and for our kids.

Do we want to be known in all places, spaces, channels, forums? Do we want decisions to be made by others for us, by stealth? Do we want our mistakes and those of our offspring to be recorded for posterity? Do we want others to film, record and analyse our faces, our moods, our behaviours? Do we want to be served a lifetime of curated content?

The speakers at the symposium present a range of perspectives. Youth ambassadors working with OVIC share a generational view on privacy – they do care about privacy, but it’s got a 2019 twist. For a generation used to sharing their lives on social media, privacy is about controlling what people know about you, how they use it and who they are.

Prof. Neil Selwyn of Monash University’s Faculty of Education talks about the impacts in schools of datafication and reductionism (defined as “reducing and turning things into data – removing context and focusing on specific qualities only”): robots to replace teachers, identifying and addressing isolation in the classroom with 71% accuracy (so low?); school-yard surveillance to catch antisocial behaviours; and a shift to using this surveillance data for anticipatory control of behaviour rather than the ‘old school’ disciplinary response. Selwyn talks about the ease with which these technologies have rolled into schools. They are a soft touch, a “loss leader” for introducing new technologies to giant sections of our society without the necessary debate, easing the way for subsequent roll-out in hospitals and other public spaces.

Another perspective is that of Tel Aviv University’s Sunny Kalev, who talks about the breakdown of boundaries between our key spaces – school, home, physical and virtual. Data about children now flows between these domains, meaning there is no escape – no place is private. He calls on the words of Edward Snowden: “A child born today will grow up with no conception of privacy at all. They’ll never know what it means to have a private moment to themselves, an unrecorded, unanalysed thought. That’s a problem because privacy matters. Privacy is what allows us to determine who we are and who we want to be.”

Dr Julia Fossi is more optimistic about what the eSafety Commissioner’s Office has been able to achieve, imagining a future where we can work with big tech to deploy AI-driven technologies to keep us safe, protect the most vulnerable of us, and send help to people in crisis. It’s the utopian vision that presents a counterpoint to my ‘do we want to be known’ questions. It also gives a little dose of optimism – that it is possible to find that middle ground of embracing the best of technology while protecting what matters.

The challenge is building the same kind of consensus around the more nebulous privacy harms as exists around safety, or as is emerging around climate change.

Mr Dutton, we need help with supplier risk

When we speak with heads of cyber, risk and privacy, eventually there comes a point when brows become more furrowed and the conversation turns to suppliers and the risk they pose.

There are a couple of likely triggers. First, APRA’s new CPS 234 regulations require regulated entities to evaluate a supplier’s information security controls. Second, there’s heightened awareness now in the business community that many data breaches suffered by organisations are ultimately a result of the breach of a supplier.

The problem space

Organisations today use hundreds or even thousands of suppliers for a multitude of services. The data shared and access given to deliver those services is increasingly so extensive that it has blurred the boundaries between organisation and supplier. In many cases, the supplier’s risk is the organisation’s risk.

Gaining assurance over the risk posed by a large number of suppliers, without using up every dollar of budget allocated to the cyber team, is an increasingly difficult challenge.

To appreciate the scope of the challenge, we first need to understand the concept of “assurance”, a term not always well understood outside the worlds of risk and assurance. So let’s take a moment to clarify, using DLP (Data Loss Prevention) as an example.

To gain assurance over a control, you need to evaluate both the design effectiveness and the operating effectiveness of that control. APRA’s new information security regulation, CPS 234, requires regulated entities to assess both when evaluating the information security controls they rely upon to manage their risk, even if that control sits with a supplier. So what would that entail in our DLP example?

  • Design effectiveness would mean confirming that the DLP tool covers all information sources and potential exit points for your data, and that data is marked so the tool can monitor it. Evidence of the control working would be collected along the way.
  • Operating effectiveness would be proof (using the evidence above) that the control has been operating as intended for the whole period it was supposed to.
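To make the distinction concrete, here is a deliberately simplified sketch in Python. Everything in it – the class names, the coverage check, the monthly evidence test – is our own illustration of the two concepts, not anything prescribed by CPS 234 or by any audit standard:

```python
from dataclasses import dataclass, field
from datetime import date


def month_range(start: date, end: date):
    """Yield (year, month) pairs covering start..end inclusive."""
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        yield (y, m)
        m += 1
        if m > 12:
            y, m = y + 1, 1


@dataclass
class EvidenceRecord:
    """A dated record showing the control fired as expected."""
    collected_on: date
    description: str


@dataclass
class ControlAssessment:
    """Illustrative model of gaining assurance over one control (e.g. DLP)."""
    name: str
    covered_sources: set = field(default_factory=set)
    required_sources: set = field(default_factory=set)
    evidence: list = field(default_factory=list)

    def design_effective(self) -> bool:
        # Design effectiveness: the control covers every information
        # source / exit point it is supposed to cover.
        return self.required_sources <= self.covered_sources

    def operating_effective(self, start: date, end: date) -> bool:
        # Operating effectiveness: evidence exists for every month of
        # the assessment period (a deliberately crude check).
        needed = set(month_range(start, end))
        seen = {(e.collected_on.year, e.collected_on.month) for e in self.evidence}
        return needed <= seen
```

In practice an auditor tests far more than coverage and evidence dates, but the split is the same: first confirm the control is designed to do the job, then prove it actually operated for the whole period.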

The unfortunate reality of assurance

In previous roles, members of our team have been part of designing and running market-leading supplier risk services. But these services never actually gave any assurance, unlike audit reports (e.g. SOC 2, ASAE). Supplier risk reports typically include a familiar caveat: “this report is not an audit and does not constitute assurance”.

This is because the typical supplier risk service involves the consulting firm sending the supplier a spreadsheet questionnaire; the supplier fills it in, and the consulting firm then asks for evidence to support the responses.

This process provides little insight into the design or operating effectiveness of a control. If the worst happens and a supplier is breached, the organisation will point to the consulting firm, and the consulting firm will point to the statement in its report that the service it provided did not constitute assurance.

We need your help, Mr Dutton

The reality is that it is simply not feasible for every organisation to gain actual assurance over every control at each of its suppliers.

We believe Australia needs a national scheme to manage supplier risk. A scheme in which baseline security controls are properly audited for their design and operating effectiveness, where assurance is gained and results are shared as needed. This would allow organisations to focus their cyber budget and energies on gaining assurance over the specific controls at suppliers that are unique to their service arrangement.

Last week, Home Affairs Minister Peter Dutton issued a discussion paper seeking input into the nation’s 2020 cyber security strategy. This is a great opportunity for industry to put forward the importance of a national and shared approach to managing supplier risk in this country. We will be putting forward this view, and some of the ideas in this post, in our response.

We encourage those of you struggling with supplier risk to do the same. If you would like to contribute to our response, please drop us a line here.