Future Privacy: A tipping point for tech

In celebration of Privacy Awareness Week we’re starting our new blog series “Future Privacy”, in which we’ll seek to understand and resolve some of the challenges many organisations face managing privacy in a time of exponential data growth. In particular, we’ll be looking into the role technology and automation can play.

In this first post of the series, we’ll start by reflecting on how privacy has developed as a discipline in Australia over the last 20 years, through the experiences of our privacy practice lead, Melanie Marks.


Perhaps surprisingly for a privacy professional, my career began in advertising. Working for an agency, I was spruiking credit cards (if the work of an ‘account coordinator’ can be called spruiking) while I finished my law degree. I took a course at ADMA where I learned about privacy in the context of direct marketing practices, including managing the quality of data held by the mailing house.

Privacy was a reasonably new concept for businesses and there were only a handful of people in Australia who would call themselves ‘privacy practitioners’. At this time, the Privacy Act had just been extended to apply to the private sector and there were two sets of privacy principles, known as the ‘NPPs’ and the ‘IPPs’. Those early years of the new millennium also produced rapid advances in digitisation and opened the consumer app market with the rolling launches of Android, Facebook, YouTube, Twitter and the iPhone – mere ripples in what would become a sea of data-driven practices requiring privacy management.

In 2008, the ALRC released its comprehensive report into the adequacy of Australia’s privacy laws, in which it took the position that ‘as a recognised human right, privacy protection generally should take precedence over a range of other countervailing interests, such as cost and convenience’. The review was an amazing product – three huge volumes of analysis, still referenced today. Despite this, it would take six years before most of its recommendations (including the unified APPs) were enacted. Many of the report’s themes are back on the table in the current review of the Privacy Act.

In 2009, there were three management-level privacy roles advertised in Sydney, and I suspect the number of purely advisory privacy roles was similarly small. By comparison, there are countless ads today for roles with privacy accountabilities, of which the best ones are at elevenM. 😊

My first role in privacy management was in eHealth, where privacy was understood to be paramount to trust in the emerging digital system. Our privacy team was most closely aligned to a compliance function and, like many of the client teams we see today, was busy with bespoke privacy impact assessments (PIAs) as well as reviewing technical requirements, contributing to draft legislation and addressing the concerns of diverse stakeholders. Although we were run off our feet (and in fact the organisation held very little personal information), in 2009 the idea of automating operational privacy tasks did not arise.

My next move, to a large retail bank, was characterised by transformation. We looked to the new concept of ‘digital trust’, which had currency overseas, to inform our privacy strategy. The team operated as an internal consultancy, delivering PIAs, managing data breaches and dealing with myriad other emerging issues. As the bank rapidly pursued customer and enterprise innovations, while seeking to remain compliant and engender trust, my team faced an insurmountable volume of requests to size up and manage the privacy impacts.

It became clear to me that the Australian market needed a scalable, automated PIA solution, and I set out to find one. The best option I found (but did not pursue) was to outsource PIAs to one of the new privacy consultancies in the market. Our team continued to deliver against the growing needs of our internal customers, but it was already evident that no amount of human capital would be enough to meet future demand. It should be said that some of today’s market-dominant privacy solutions were already out there, but adoption was not commonplace.

In recent years, we have seen a significant blurring of the roles played by privacy, data governance and information security teams. Responsibilities have moved, morphed and evolved. For example, tasks which were previously the domain of data governance or were entirely neglected (such as inventories, mapping, data retention and maintenance) have drifted into the work of privacy teams. Incident management often sits between privacy and cyber teams, alongside legal and other stakeholders. Vendor assessment has become a multidisciplinary process undertaken by security, privacy, data governance, compliance and procurement personnel, among others. What we are seeing has validated our firm’s objective of delivering services that combine these disciplines. It has also highlighted the need for enterprise collaboration and risk management software.

Amongst most of our clients, we are also seeing that the tsunami of data every organisation now holds is increasing demand for privacy expertise. The speed and scale at which organisations can now view, collect, create, use and share data would not have been believed in the early 2000s. Behind this are the emergence of cloud-based services, the comparative reduction in the cost of data storage, the willingness of companies to outsource key functions and the seeming desire of organisations to analyse every piece of data they have ever collected or might infer. We have also seen a significant tightening of laws (think European and APAC changes, as well as mandatory PIAs for Commonwealth agencies and mandatory reporting of breaches). Operational privacy can no longer be managed using the same processes teams used 20 years ago.

Today, there is no way that a person (or even 10 people) with a spreadsheet (or 100 spreadsheets) in any large enterprise can definitively map data flows or inventory an organisation’s data holdings, while also risk-assessing all material initiatives and responding to data breaches, data subject requests and inquiries. We have a scaling problem. Transformation in privacy will therefore be necessary for survival; in fact, the tipping point is here.

Yet, whilst today there are tools offered by hundreds of vendors for privacy assessment, consent management, data mapping, data subject requests, incident response and notification, scanning, discovery, de-identification and more[1], take-up in the Australian market has so far been patchy.

Every privacy professional, CIO, CISO and CDO needs to know about these tools. And every privacy leader should be thinking about how to implement the tools and hence, how to build their teams of the future. In my next blog I will be imagining a new way forward. What should organisations look for in a technology solution? Is it possible to buy the turn-key solution to end your privacy woes? And what skills will be needed in the privacy workforce of the future?


[1] As a starting point, you might like to read the IAPP’s 2021 Tech Vendor Report, featuring a mere 245 pages of privacy vendors grouped by product category.

Parenting, privacy and the future

elevenM Principal Melanie Marks reflects on the need for better public engagement on the future of privacy, much as is emerging around climate change.

As I sat at a symposium on children’s online privacy a little over a week ago, the footsteps of a thousand marching children ran through my head. Just streets away from the event was the Melbourne base of the national climate change rallies. In the room was an inspiring group of academics, regulators and policy makers, grappling with the future of surveillance and profiling. Both are social issues concerning the future of humanity.

I pondered why the climate rally had delivered so many to the streets now, when we have known about climate change for years. I concluded that what has changed is that we can see it now, and we are parents. We see the world through our kids’ eyes – strangely short seasons, farmers without water and a disappearing reef are some recent reflections in the Marks household.

Privacy harm is more nebulous. The potential policy issues are hard to solve, and engaging the public is even more difficult.

Who could blame consumers for tapping out of this conundrum? According to symposium speaker Sven Bluemmel, Victoria’s Information Commissioner, the model which underpins global privacy frameworks is broken. It rests on the idea that we, the consumers, are free, informed and empowered to make decisions. But the paradigm is stretched. AI, if done properly, reaches unexpected conclusions. If you can’t explain it, says Bluemmel, you can’t consent to it. If you don’t collect information – you merely infer it – the moment to ask for consent never comes. Add to this the privacy paradox: consumers say we care, but at crunch time our behaviours belie these words. We are bombarded with notices, consents, terms and conditions, breach notifications, etc. etc. etc. We are fatigued. We would need 65 hours every day to read all this content.

As a result, we consumers are blindly making a trade-off every time we accept terms and conditions we do not read and cannot understand, weighing benefits now (e.g. convenience, incentives, personalisation) against future, undefined harms.

One of the guests at the symposium asks why the regulators aren’t doing more to beat corporations with the available sticks. But this is the wrong question. I mean, it’s a good question, deserving of a response featuring cold hard facts, plans and budgets, but it is not the right question right now. The real question is what kind of future we want for ourselves and for our kids.

Do we want to be known in all places, spaces, channels, forums? Do we want decisions to be made by others for us, by stealth? Do we want our mistakes and those of our offspring to be recorded for posterity? Do we want others to film, record and analyse our faces, our moods, our behaviours? Do we want to be served a lifetime of curated content?

The speakers at the symposium present a range of perspectives. Youth ambassadors working with OVIC share a generational view on privacy: they do care about it, but it’s got a 2019 twist. For a generation used to sharing their lives on social media, privacy is controlling what people know about us, how they use it and who they are.

Prof. Neil Selwyn of Monash University’s Faculty of Education talks about the impacts of datafication and reductionism (defined as: “reducing and turning things into data – remove context and focus on specific qualities only”) in schools: robots replacing teachers, identifying and addressing isolation in the classroom with 71% accuracy (so low?); school-yard surveillance to catch antisocial behaviours; and a shift to using this surveillance data for anticipatory control of behaviour rather than the ‘old school’ disciplinary response. Selwyn talks about the ease with which these technologies have rolled into schools. They are a soft touch, a “loss leader” for introducing new technologies to giant sections of our society without the necessary debate, easing the way for subsequent roll-out in hospitals and other public spaces.

Another perspective is that of Tel Aviv University’s Sunny Kalev, who talks about the breakdown of boundaries between our key spaces – school, home, physical and virtual. Data now flows between these domains for children, meaning there is no escape – no place is private. He calls on the words of Edward Snowden: “A child born today will grow up with no conception of privacy at all. They’ll never know what it means to have a private moment to themselves, an unrecorded, unanalysed thought. That’s a problem because privacy matters. Privacy is what allows us to determine who we are and who we want to be.”

Dr Julia Fossi is more optimistic about what the eSafety Commissioner’s Office has been able to achieve, imagining a future where we can work with big tech to deploy AI-driven technologies to keep us safe, protect the most vulnerable of us, and send help to people in crisis. It’s the utopian vision that presents a counterpoint to my ‘do we want to be known’ questions. It also gives a little dose of optimism – that it is possible to find that middle ground of embracing the best of technology while protecting what matters.

The challenge is building the same kind of consensus around the more nebulous privacy harms as exists around safety, or as is emerging around climate change.