Breakdown of the Optus breach response

elevenM Principal Arjun Ramachandran takes a critical look at the communications response to a major data breach.

Crisis communications for a data breach are never easy. Things move fast, much is unclear, and it’s not always obvious how to apply well-established crisis communications principles to cyber security incidents. Commenting from the outside is always easier than having to make the calls from the middle of the maelstrom. 

Nevertheless, as comms people do, a few friends and I recently exchanged opinions about Optus’ public comms response to its recent cyber-attack, just as that response was unfolding. Below is a summary of some of my take-outs.

Overall, I reckon Optus put out a largely constructive response to what looks, at this stage, to be a serious data breach. The highlights? Responsive, empathetic, transparent, and largely free of speculation.  But let’s go into more detail below. 

First, what Optus didn’t do so well 

Victim-playing … kinda. The first quote from Optus CEO Kelly Bayer Rosmarin in the Optus statement (which was bound to get a run in all media coverage) read: “We are devastated to discover that we have been subject to a cyber-attack that has resulted in the disclosure of our customers’ personal information to someone who shouldn’t see it”.

“Devastated to discover … we have been subject to” – with this language, Optus strays towards painting itself as the victim. In some sense it is, but for a public response the only victims Optus should be focused on are the customers it was meant to protect. Which brings us to …  

Stepping back, not forward? Optus’ use of a string of passive phrases (“Devastated to discover”, “we have been subject to”, “that has resulted in”) comes across as Optus trying to create distance between itself and responsibility for the breach. This won’t sit well, especially with those impacted.

Optus is ultimately responsible for protecting its customers’ data, and for any breach. Imagine a bank saying: “Today we were devastated to discover that we were subjected to a robbery that resulted in customers’ jewellery and valuables being taken by people who shouldn’t have had it.” Most would think: “Whatever dude, you were meant to protect it”. (Update: Karl Stefanovic’s response on the Today show this morning sums up this sentiment.) The only vibe to convey is one of accountability.

The use of “devastated” also felt overly emotive. In subsequent media appearances Rosmarin replaced “devastated” with “deeply disappointed” and “deeply sorry”, which more precisely strikes the tone of regret and contrition needed. 

What Optus did well 

In the final washup, the above issues weren’t overly influential because Optus actually did a lot right. 

Responsive. According to the SMH, Optus disclosed this incident publicly after finding out about it late the previous day. That’s relatively quick, despite some commentary suggesting otherwise. Companies can sit on these things for days, weeks and even months as they evaluate what’s happened.

They showed contrition. Optus made clear it was “deeply sorry”, “very sorry” and, well, “devastated” by what had happened. Expressing empathy, understanding and regret for the potential harm (not “inconvenience”!) that a data breach can cause individuals is simply the other side of the accountability coin.

They didn’t speculate. In pursuit of transparency (a well-known crisis communications principle), companies dealing with a data breach often fall into the trap of speculating or guessing about the details. This is dangerous and potentially embarrassing, especially if those details later need to be corrected once investigations progress. While media reports variously described “millions” or 2.8 million customers being affected, Optus repeatedly held the line against confirming any number (going only with “a significant number”), on the basis that it is still investigating. (Note, the flipside risk of this approach is that media outlets report a breach affecting “up to 10 million customers”, on the basis that this is how many customers Optus has.)

Transparency, the cyber way. Optus also clearly understands that transparency around cyber breaches is not just about conveying breach details. Its statement describes in detail the actions it was taking once the incident was known, including containment steps, the commencement of investigations, and the rationale behind communications decisions. All of these details shed light on how the situation is being managed.

A banner link at the top of their website to a dedicated page containing their latest statement on the incident and FAQs is also best practice for cyber incidents. It gives customers a single place to go for the latest information.

They used lots of active language. Notwithstanding earlier criticisms about the passive sections in the CEO quote, large parts of the Optus statement were actually in active voice (see image below). There’s a well-worn cliché in security – “it’s not a matter of if, but when you suffer an attack”. When the attack comes, you need to be swinging into action fast to contain, understand and otherwise respond to what’s happened – which helps demonstrate you are taking accountability for what’s happened and what comes next. The active language conveys Optus doing that.  

They brought in the big guns. “Optus is working with the Australian Cyber Security Centre to mitigate any risks to customers. Optus has also notified the Australian Federal Police, the Office of the Australian Information Commissioner and key regulators.” 

As a major telco, no doubt Optus has well-resourced cyber security and privacy teams. It’s nevertheless helpful to emphasise that you’ve engaged the authorities for help and are working with regulators openly. 

And the small mercies … No trite mentions of how much it “takes security very seriously”. Yes Optus! 

elevenM collaborates with IPC NSW for PAW 2022

elevenM is excited to be collaborating with the Information and Privacy Commission NSW (IPC) for Privacy Awareness Week 2022 to help NSW government agencies in the management of privacy risks. 

We’ve partnered with IPC on the development of a new Privacy Impact Assessment (PIA) questionnaire that NSW public sector agencies can use to assess their websites for privacy risks. By using this tool, NSW Government agencies can draw on industry best practices to more efficiently assess privacy risks and identify remediation actions. 

The new IPC tool draws from elevenM’s PIA and privacy tooling suite. Anyone who’s done a PIA will know that PIA tools come in many shapes and sizes.

There are tools for specific industries and business contexts and for individual jurisdictions and legal frameworks. Some tools function primarily as questionnaires, others offer guidance and recommendations. There are tools designed to be used by privacy experts, while others bake-in expert knowledge so they can be used by anyone in the business. 

elevenM’s privacy experts have worked with all these kinds of PIA tools, using them with many business clients and in a variety of contexts. We’ve drawn on this collective experience to create a library of PIA and privacy tools that is as useful and practical as possible.

If you’d like more information about our PIA tools, please contact us at 

For more information about our collaboration with IPC, please refer to their PAW 2022 webpage.

When it’s all by design

elevenM Principal Arjun Ramachandran reflects on the explosion of “by design” methodologies, and why we must ensure the phrase doesn’t become a hollow catchphrase.

Things catch on fast in business.

Software-as-a-service had barely taken hold as a concept before enterprising outfits saw opportunities to make similar offerings up and down the stack. Platform-as-a-service and infrastructure-as-a-service followed swiftly, then data-as-a-service.

Soon enough, the idea broke free of the tech stack entirely. There emerged CRM-as-a-service, HR-as-a-service and CEO-as-a-service.

“As a service” could in theory reflect a fundamentally new business model. Often though, simply appending the words “as a service” to an existing product gave it a modern sheen that was “on trend”. Today, you can get elevators-as-a-service, wellness-as-a-service and even an NFT-as-a-service.

A few days ago, I came across a hashtag on Twitter – #trustbydesign – that gave me pause about whether something similar was underway in an area closer to home to me professionally.

For those in privacy and security, the “by design” imperative is not new. Nor is it trite.

“Privacy by design” – in which privacy considerations are baked into new initiatives at design phase, rather than remediated at the end – is a core part of modern privacy approaches. In a similar way, “secure by design” is now a familiar concept that emphasises shifting security conversations forward in the solution development journey, rather than relegating them to bug fixes or risk acceptances at the end.

But could we be entering similar territory to the as-a-service crew? For those involved broadly in the pursuit of humanising tech, on top of privacy by design and secure by design there are now exclamations of safety by design, resilience by design, ethical by design, care by design, empathy by design and the aforementioned trust by design.

Don’t get me wrong, I love a good spin-off. But as we continue to promote doing things “by design”, it’s worth keeping an eye on its usage and promotion, so it doesn’t become a hollow catchphrase at the mercy of marketing exploitation (for a parallel, see how some security peeps are now vigorously standing up to defend “zero trust”, a security approach, against assertions that it’s “just a marketing ploy”).

Doing things “by design” is important and valuable. It speaks to a crystallising of intent. A desire to do things right, and to do them up front. In fields like privacy and security, where risks have historically been raised late in the piece or as an afterthought (and sometimes ignored as a result), the emergence and adoption of “by design” approaches is a welcome and impactful change.

As “by design” catches on as a buzzword, however, it’s vital we ensure there’s substance sitting behind each of its variants. Consider the following two examples.

Privacy by design
Privacy Impact Assessments (PIAs) provide a rigorous, systematic and well-established assessment process that gives structure and tangible output to the higher intent of “privacy by design”. Regulators like the OAIC endorse their use and publish guidance on how to do them. At elevenM, we live and breathe PIAs. Whether undertaking detailed gap analyses and writing reports (narrative, factual, checklist-based, metric-based, anchored to organisational risk frameworks, national or international), training clients on PIAs or supporting them with automated tools and templates, we’re making the assessment of privacy impacts – and therefore privacy – easier to embed in project lifecycles. 

Ethics by design
The area of data ethics is a fast-emerging priority for data-driven businesses. We’ve been excited to work with clients on ways of designing and implementing ethical principles, including through the development of frameworks and toolkits that enable these principles to be operationalised into actions that organisations can take to make their data initiatives more ethical by design.

At a minimum, a similar structured framework or methodology should be articulated for any “by design” philosophy.

A final consideration for businesses is the need to synthesise these “by design” approaches as they take hold. There’s some risk that these various imperatives – privacy, security, data governance, ethics – will compete and clash as they converge at the design phase. It’ll be increasingly vital to have teams with cross-disciplinary capability or expertise who can efficiently integrate the objectives and outcomes of each area towards an overall outcome of greater trust.

We leave the closing words to Kid Cudi: “And the choices you made, it’s all by design”.

If we can help you with your “by design” approaches, reach us at

Has the cookie crumbled?

elevenM’s Chaitalee Sohoni dives into the what and why of third-party cookies, Google’s plan to phase them out and what this means for businesses and individuals alike.

By 2023, Google Chrome will phase out support for third-party cookies as part of its Privacy Sandbox Initiative, with Stage 1 set to start in late 2022.

Google first announced its intention to eliminate third-party cookies from its Chrome browser in early 2020 and made it explicit that it ‘will not build alternate identifiers to track individuals as they browse across the web’.

If you have been on a website in the last couple of years, you might have encountered an annoying pop-up inviting you to read the company’s ‘cookie policy’ and review your cookie preferences. Chances are you clicked ‘agree’ without reading it and moved on to the content of the page, mostly because privacy policies are tedious to read. The cookie policy on any website is essentially notifying you that a cookie is downloaded to your computer to ‘enhance’ your browsing experience each time you visit the website.

But what exactly are cookies and how do they affect you?

A cookie is a small piece of data, stored as a text file, that is unique to each user. When you visit a new website, cookies are created to identify you and personalise your experience based on your browsing history.

Cookies aren’t inherently bad; it’s what we choose to do with them that raises concerns about data privacy.

Cookies were invented by Lou Montulli in 1994 and have since been the backbone of the web browsing experience. Cookies are created to remember and recall information that is useful while browsing, such as login details or the previous page you visited on a website. Without cookies, browsing the internet would be an extremely frustrating process — imagine adding an item to your cart when you shop online, and having it disappear each time you go back to add more items. Think Dory from Finding Nemo.

There are two kinds of cookies: first-party cookies and third-party cookies. First-party cookies are created and downloaded from the primary website you are visiting.
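
To make the first-party case concrete, here is a minimal browser-side sketch in TypeScript of a site remembering a hypothetical shopping-cart ID via document.cookie. The cookie name, value and lifetime are illustrative only.

```typescript
// Minimal sketch of a first-party cookie: the site you are visiting stores
// a small piece of data in your browser and reads it back on later pages.
// "cart_id" and its value are purely illustrative.
function rememberCart(cartId: string): void {
  // The cookie belongs to the current site (first-party) and is sent back
  // with every subsequent request to that same site for the next 24 hours.
  document.cookie = `cart_id=${encodeURIComponent(cartId)}; path=/; max-age=${60 * 60 * 24}`;
}

function readCart(): string | undefined {
  // document.cookie is a single "name=value; name2=value2" string.
  const pair = document.cookie
    .split('; ')
    .find((entry) => entry.startsWith('cart_id='));
  return pair ? decodeURIComponent(pair.slice('cart_id='.length)) : undefined;
}

rememberCart('basket-42');
console.log(readCart()); // "basket-42", even after navigating to another page on the same site
```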

Third-party cookies, however, are generated and saved on your computer by other websites whose content is embedded in the primary website you browse. For example, when you visit a website, it’ll most likely contain advertisements or images from other websites, or even a Facebook ‘like’ button. Even if you don’t click on them, cookies from those websites are created and stored on your system.
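
For illustration, the sketch below shows how a hypothetical tracker domain (tracker.example) could set a cross-site cookie when the browser fetches one of its embedded pixels. This is a simplified Node/TypeScript sketch, not a description of any particular tracker; in practice the server would need to run over HTTPS for the Secure cookie to be accepted.

```typescript
// Simplified sketch of a third-party "tracking pixel" server for a
// hypothetical domain (tracker.example). When another site embeds an image
// from this server, the browser stores a cookie owned by tracker.example.
import * as http from 'http';

http
  .createServer((_req, res) => {
    // SameSite=None; Secure is what modern browsers require before they will
    // send this cookie in cross-site (third-party) contexts. In practice the
    // server must be served over HTTPS for the Secure attribute to be honoured.
    res.setHeader(
      'Set-Cookie',
      'visitor_id=abc123; Path=/; Max-Age=31536000; SameSite=None; Secure'
    );
    res.setHeader('Content-Type', 'image/gif');
    res.end(); // the response body doesn't matter; the cookie has been set
  })
  .listen(8080);
```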

If you have ever had an advertisement follow you around on the internet, it is because of third-party cookies. Based on the websites you visit, cookies gather a great deal of information about you such as your age bracket, gender, location, interests, personal preferences etc. Advertising companies use cookies to track your activity on the internet by building a profile of your interests based on your browsing history to send you personalised advertisements. Cookies allow companies to make more money by helping them find the right audience for their products. Platforms such as Facebook and Google are heavily incentivised to ensure advertisements from brands reach the targeted users.

With its Privacy Sandbox Initiative, Google aims to withdraw support for third-party cookies. At first glance, this move appears to be a step in the right direction for data privacy, but Google is a tad late to this party. Mozilla’s Firefox, Apple’s Safari and Brave blocked third-party cookies years ago, making them more privacy-robust browsers. There’s also DuckDuckGo, a privacy-focused search engine that also offers a browser for mobile phones.

Google may not be the first to block third-party cookies, but Chrome is the most popular browser, with a global market share of 64.4% as of January 2022 (Safari and Firefox account for only 16.9% and 3.9% respectively). And so, Google’s plan to phase out third-party cookies is a big deal for the web.

With Google hopping on the bandwagon, does this spell the end for third-party cookies? Maybe. Does it mean that your browsing history won’t be tracked anymore? The answer is not that simple.

Eliminating third-party cookies does remove the power advertising companies have to track individuals, but it places that power directly into Google’s hands. With Chrome no longer relying on third-party cookies to collect data about users, Google will no longer support companies in selling targeted web advertisements to individuals. This move will give Google an upper hand in collecting first-party data from users, including data from mobile applications, to which the cookie ban doesn’t apply.

Google’s move will have a drastic impact on businesses and advertisers as they will need to rely heavily on first-party data or find alternatives to reach their audiences. In a joint statement, the Association of National Advertisers and the American Association of Advertising Agencies have pointed out that ‘Google’s decision to block third-party cookies in Chrome could have major competitive impacts for digital businesses, consumer services, and technological innovation.’

Proposed legislative changes in this area will also have a bearing on businesses. In the review of the Privacy Act currently underway, one of the proposed changes includes replacing ‘about’ with ‘related to’ in the definition of personal information in the Privacy Act 1988. The purpose of this change is to explicitly bring more technical identifiers such as IP addresses or unique, persistent identifiers used in cookies within the scope of the Act. Under this new definition, unique identifiers are very likely to be considered personal information and this change will therefore have a bearing on the use of cookies by websites that depend on unique identifiers to track individuals.

Google initially planned to replace third-party cookies with Federated Learning of Cohorts (FLoC). FLoC was designed to track individuals based on their web browsing and group them into cohorts defined by similar interests. However, in January this year, Google announced that it was replacing FLoC with Topics. Topics is also built on the idea of interest-based advertising, where the browser determines top interests for users based on their browsing history, with Google stating that ‘it provides you with a more recognizable way to see and control how your data is shared, compared to tracking mechanisms like third-party cookies.’
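
For the curious, the hedged sketch below shows how a page might query the proposed Topics API as it has been publicly described. The exact API surface may change, and most browsers will not support it at all, hence the feature check before calling.

```typescript
// Hedged sketch of querying the proposed Topics API. Availability and the
// exact result shape depend on the browser; most browsers won't expose it,
// hence the feature check before calling.
async function logInferredTopics(): Promise<void> {
  if ('browsingTopics' in document) {
    // Returns a small set of coarse interest categories inferred locally by
    // the browser from recent browsing history.
    const topics = await (document as unknown as {
      browsingTopics: () => Promise<unknown[]>;
    }).browsingTopics();
    console.log('Topics the browser is willing to share:', topics);
  } else {
    console.log('Topics API is not available in this browser.');
  }
}

logInferredTopics();
```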

Google is still exploring options to fulfil its promise to phase out the use of third-party cookies by 2023, a delay from its initial plan to phase them out by 2022. We may have to wait a little longer to see how third-party cookies will be replaced by Google.

[UPDATE: An earlier version of this post stated Google intended to replace third-party cookies with Federated Learning of Cohorts (FLoC); however, it has now opted to replace them with Topics.]

The enduring modern privacy challenge

Privacy has evolved considerably since ancient thinkers first wrote about it as a concept. In this post, elevenM’s Arjun Ramachandran plots this evolution and discusses how finding skilled people has become the enduring modern challenge for the privacy profession. We also preview elevenM’s upcoming webinar.

In his musings over privacy, Aristotle could hardly have contemplated the complex challenges that would be heralded by the digital future.

The ancient Greek philosopher drew a distinction between private family life (oikos) and the public sphere of political affairs (polis). While this conception of the division between domestic life and the state remains valid, the landscape today is so much more complex.

Our lives today play out and are made meaningful by our existence in an intricate web of relationships – with other people, with businesses and with information – largely mediated through our daily use of various digital services and participation in online platforms.

The modern privacy challenge – as distinct from Aristotle’s paradigm where privacy was more synonymous with domestic life – is perhaps to locate and define privacy within this more complex picture. In this context, our experiences of privacy are increasingly not simply the result of our personal choices (such as deciding to remain in the privacy of the family home). Instead, they’re dictated by how this broader digital and information ecosystem – one in which we all must necessarily participate – is engineered.

Banks, government agencies, streaming services, social media platforms … and all manner of services have now become, by default, digital businesses. So it is that the digital platforms we use to communicate and the organisations with which we share our information have become outsized gatekeepers of our privacy.

We know that there is strong community sentiment in favour of privacy, which provides direction for businesses seeking us as customers. Regulations such as the Privacy Act 1988 and the EU’s GDPR also set legal baselines for how privacy must be protected. But realising these outcomes ultimately falls to these organisations and how they collectively handle our information.

In our work with many of these companies, we’ve seen growing intent and motivation over recent years to get privacy right. There has been steady investment in establishing dedicated privacy teams and building privacy processes and programs.

But when their privacy capability reaches a certain point of maturity, many organisations appear to hit the same wall: an inability to find skilled professionals to keep doing the good work of privacy. “The global shortage of privacy professionals” has sadly become a well-documented state of affairs.

The challenge will only intensify in coming years as digitisation expands, with growing use of data-driven technologies like machine learning and artificial intelligence. This is to say nothing of the review of the Privacy Act currently underway, and the expanded compliance requirements it will bring.

At elevenM, we’ve been talking about this problem at length amongst ourselves and with some of our industry colleagues. We’d love to open up this problem to a broader audience – such is its breadth and criticality that we believe it requires a truly collective approach.

On February 8, elevenM’s Melanie Marks and Jordan Wilson-Otto, in partnership with the International Association of Privacy Professionals, will deliver a presentation diving deeper into the talent drought and exploring solutions. The presentation will be followed by a multidisciplinary panel discussion featuring leaders in privacy, academia and industry.

If you’d like to attend, click the link below to register.

A Lawyer, a Chemist, an Ethicist, a Copywriter and a Programmer Walk Into a Bar

Melanie Marks, CIPP/E, CIPM, FIP, Principal, elevenM
Jordan Wilson-Otto, Principal, elevenM

Chantelle Hughes, Executive Manager, Technology Community, Commonwealth Bank
Sacha Molitorisz, Lecturer, Centre for Media Transition, University of Technology Sydney
Jacqui Peace, AU/NZ Privacy Management, Google
Daimhin Warner, CIPP/E, Principal & Director, Simply Privacy

Towards a safer online world for children and the vulnerable

elevenM’s Jordan Wilson-Otto shares findings from recent research on the privacy risks and harms for children and vulnerable groups.

Yesterday the Government initiated two consultations (on the Online Privacy Bill and the Privacy Act Review), both of which include a focus on better protecting kids. elevenM worked on key research that informed thinking behind these changes, and we are delighted to share the outputs of that research here.

Our research was commissioned by the Office of the Australian Information Commissioner and conducted in partnership with two leading academics from Monash Law School (Normann Witzleb and Moira Paterson). It provides an in-depth analysis of the privacy risks and harms that can arise for children and for other vulnerable groups online and makes recommendations for additional protections that could be put in place to mitigate these risks.

We’re proud of our contribution, and we hope it might serve as a useful reference for those drafting submissions to the Online Privacy Bill exposure draft and the Privacy Act Review Discussion Paper.

If you don’t have time to read the whole report, here are some of our key findings:

  • Children can be vulnerable online due to limitations in their basic and digital literacy, cognitive abilities and capacity for future-focused decision making.
  • Individual characteristics and situational factors shape susceptibility to harm, even for adults. Vulnerability is dynamic and contextual, and its causes are complex and varied. Identifying individuals who need greater protection is not always straightforward.
  • Children and other vulnerable groups face a wide variety of harms online. Mostly, these arise from the monetisation of their personal information and the manipulation of their behaviour, but also from the social impacts that sharing personal information can have on their reputation and life opportunities, and from e-safety risks. But it’s also important to remember that no environment is risk-free, and participation, the right to take one’s own risks and the development of digital skills are also important goals.
  • Digital platforms have all adopted measures aimed at protecting children and other vulnerable groups. However, these are highly variable between platforms and are often difficult to navigate and limited in their effectiveness.
  • There is an international trend towards implementing additional privacy protections for children, with the UK the most advanced, followed by the EU and the USA. We review enhanced privacy protections for children and other vulnerable groups across these jurisdictions.
  • Reliance on consent should be limited. In most cases with social media and digital platforms, even adults are not able to understand the conditions they’re agreeing to.
  • For everyone, but particularly for kids, privacy transparency should aim for more than mere disclosure of material facts, and should instead aim to educate, empower and enable privacy self-management in line with a child’s developing needs and capabilities.

Finally, we made a range of recommendations for additional protections that could be put in place, either via an Online Privacy Code or as part of the broader review of the Privacy Act, in order to better protect our most vulnerable. Some of the key ones are:

  • To establish an overriding obligation to handle personal information lawfully, fairly and reasonably, taking into account the best interests of the child (where children are involved).
  • To place a greater onus on platforms to verify users’ age (taking into account any privacy risks arising from verification measures).
  • To strengthen requirements for consent, taking into account whether it is reasonable to expect that an individual understands what it is that they are consenting to.
  • To strengthen privacy transparency requirements, including requirements to collect engagement metrics for privacy notifications and privacy features, and to demonstrate that steps taken to ensure user awareness of privacy matters are reasonable.

We’re encouraged to see that the focus on better protecting kids online is already generating national and international headlines, and hope this research will play a role in steering us towards the right reforms. Ultimately, any new code or reforms to the Privacy Act must not only protect children and vulnerable groups from online risks, but also enable them to fully access and participate in the benefits of the online world.

If you would like to discuss this research in more detail, or would like us to assist you to understand the broader Privacy Act changes being considered, drop us a line at