Breakdown of the Optus breach response

elevenM Principal Arjun Ramachandran takes a critical look at the communications response to a major data breach.

Crisis communications for a data breach are never easy. Things move fast, much is unclear, and it’s not always obvious how to apply well-established crisis communications principles to cyber security incidents. Commenting from the outside is always easier than having to make the calls from the middle of the maelstrom. 

Nevertheless, as comms people do, a few friends and I recently exchanged opinions about Optus’ public comms response to its recent cyber-attack, just as that response was unfolding. Below is a summary of some of my take-outs.

Overall, I reckon Optus put out a largely constructive response to what looks, at this stage, to be a serious data breach. The highlights? Responsive, empathetic, transparent, and largely free of speculation.  But let’s go into more detail below. 

First, what Optus didn’t do so well 

Victim-playing … kinda. The first quote from Optus CEO Kelly Bayer Rosmarin in the Optus statement (which was bound to get a run in all media coverage) read: “We are devastated to discover that we have been subject to a cyber-attack that has resulted in the disclosure of our customers’ personal information to someone who shouldn’t see it.”

“Devastated to discover … we have been subject to” – with this language, Optus strays towards painting itself as the victim. In some sense it is, but for a public response the only victims Optus should be focused on are the customers it was meant to protect. Which brings us to …  

Stepping back, not forward? Optus’ use of a string of passive phrases (“Devastated to discover”, “we have been subject to”, “that has resulted in”) comes across as Optus trying to create distance between itself and responsibility for the breach. This won’t sit well, especially with those impacted.

Optus is ultimately responsible for protecting its customers’ data, and for any breach. Imagine a bank saying: “Today we were devastated to discover that we were subjected to a robbery that resulted in customers’ jewellery and valuables being taken by people who shouldn’t have had it.” Most would think: “Whatever dude, you were meant to protect it”. (Update: Today show’s Karl Stefanovic’s response this morning sums up this sentiment). The only vibe to convey is one of accountability. 

The use of “devastated” also felt overly emotive. In subsequent media appearances Bayer Rosmarin replaced “devastated” with “deeply disappointed” and “deeply sorry”, which more precisely strikes the tone of regret and contrition needed. 

What Optus did well 

In the final washup, the above issues weren’t overly influential because Optus actually did a lot right. 

Responsive. According to the SMH, Optus disclosed this incident publicly after finding out about it late the previous day. That’s relatively quick, despite some commentary suggesting otherwise. Companies can sit on these things for days, weeks and even months as they evaluate what’s happened. 

They showed contrition. Optus made clear it was “deeply sorry”, “very sorry” and, well, “devastated” by what had happened. Expressing empathy, understanding and regret for the potential harm (not “inconvenience”!) that a data breach poses to individuals is merely the other side of the accountability coin. 

They didn’t speculate. In pursuit of transparency (a well-known crisis communications principle), companies dealing with a data breach often fall into the trap of speculating or guessing about the details. This is dangerous and potentially embarrassing, especially if those details later need to be corrected once investigations progress. While media reports variously described “millions” or 2.8 million customers being affected, Optus repeatedly held the line against confirming any number (going only with “a significant number”), on the basis that it is still investigating. (Note, the flipside risk of this approach is that media outlets report a breach affecting “up to 10 million customers”, on the basis that this is how many customers Optus has.)

Transparency, the cyber way. Optus also clearly understands that transparency around cyber breaches is not just about conveying breach details. The statement describes in detail the actions Optus was taking once the incident was known, including containment actions, investigations having commenced, and the rationale around communications decisions. All of these details shed light on how the situation is being managed. 

A banner link at the top of their website to a dedicated page containing their latest statement on the incident and FAQs is also best-practice for cyber incidents. It gives customers a single place to go for the latest information.

They used lots of active language. Notwithstanding earlier criticisms about the passive sections in the CEO quote, large parts of the Optus statement were actually in active voice (see image below). There’s a well-worn cliché in security – “it’s not a matter of if, but when you suffer an attack”. When the attack comes, you need to be swinging into action fast to contain, understand and otherwise respond to what’s happened – which helps demonstrate you are taking accountability for what’s happened and what comes next. The active language conveys that Optus is doing just that.  

They brought in the big guns. “Optus is working with the Australian Cyber Security Centre to mitigate any risks to customers. Optus has also notified the Australian Federal Police, the Office of the Australian Information Commissioner and key regulators.” 

As a major telco, no doubt Optus has well-resourced cyber security and privacy teams. It’s nevertheless helpful to emphasise that you’ve engaged the authorities for help and are working with regulators openly. 

And the small mercies … No trite mentions of how much it “takes security very seriously”. Yes Optus! 

elevenM collaborates with IPC NSW for PAW 2022

elevenM is excited to be collaborating with the Information and Privacy Commission NSW (IPC) for Privacy Awareness Week 2022 to help NSW government agencies in the management of privacy risks. 

We’ve partnered with IPC on the development of a new Privacy Impact Assessment (PIA) questionnaire that NSW public sector agencies can use to assess their websites for privacy risks. By using this tool, NSW Government agencies can draw on industry best practices to more efficiently assess privacy risks and identify remediation actions. 

The new IPC tool draws from elevenM’s PIA and privacy tooling suite. Anyone who’s done a PIA will know that PIA tools come in many shapes and sizes.  

There are tools for specific industries and business contexts and for individual jurisdictions and legal frameworks. Some tools function primarily as questionnaires, others offer guidance and recommendations. There are tools designed to be used by privacy experts, while others bake in expert knowledge so they can be used by anyone in the business. 

elevenM’s privacy experts have worked with all these kinds of PIA tools, using them with many business clients and in a variety of contexts. We’ve drawn on this collective experience to create a library of PIA and privacy tools that is as useful and practical as possible. 

If you’d like more information about our PIA tools, please contact us at hello@elevenm.com 

For more information about our collaboration with IPC, please refer to their PAW 2022 webpage.

When it’s all by design

elevenM Principal Arjun Ramachandran reflects on the explosion of “by design” methodologies, and why we must ensure it doesn’t become a catchphrase.

Things catch on fast in business.

Software-as-a-service had barely taken hold as a concept before enterprising outfits saw opportunities to make similar offerings up and down the stack. Platform-as-a-service and infrastructure-as-a-service followed swiftly, then data-as-a-service.

Soon enough, the idea broke free of the tech stack entirely. There emerged CRM-as-a-service, HR-as-a-service and CEO-as-a-service.

“As a service” could in theory reflect a fundamentally new business model. Often though, simply appending the words “as a service” to an existing product gave it a modern sheen that was “on trend”. Today, you can get elevators-as-a-service, wellness-as-a-service and even an NFT-as-a-service.

A few days ago, I came across a hashtag on Twitter – #trustbydesign – that gave me pause about whether something similar was underway in an area closer to home to me professionally.

For those in privacy and security, the “by design” imperative is not new. Nor is it trite.

“Privacy by design” – in which privacy considerations are baked into new initiatives at design phase, rather than remediated at the end – is a core part of modern privacy approaches. In a similar way, “secure by design” is now a familiar concept that emphasises shifting security conversations forward in the solution development journey, rather than relegating them to bug fixes or risk acceptances at the end.

But could we be entering similar territory to the as-a-service crew? For those involved broadly in the pursuit of humanising tech, on top of privacy by design and secure by design there are now exclamations of safety by design, resilience by design, ethical by design, care by design, empathy by design and the aforementioned trust by design.

Don’t get me wrong, I love a good spin-off. But as we continue to promote doing things “by design”, it’s worth keeping an eye on its usage and promotion, so it doesn’t become a hollow catchphrase at the mercy of marketing exploitation (for a parallel, see how some security peeps are now vigorously standing up to defend “zero trust”, a security approach, against assertions that it’s “just a marketing ploy”).

Doing things “by design” is important and valuable. It speaks to a crystallising of intent. A desire to do things right, and to do them up front. In fields like privacy and security, where risks have historically been raised late in the piece or as an afterthought (and sometimes ignored as a result), the emergence and adoption of “by design” approaches is a welcome and impactful change.

As “by design” catches on as a buzzword, however, it’s vital we ensure there’s substance sitting behind each of its variants. Consider the following two examples.

Privacy by design
The Privacy Impact Assessment (PIA) is a rigorous, systematic and well-established assessment process that provides structure and tangible output to the higher intent of “privacy by design”. Regulators like the OAIC endorse their use and publish guidance on how to do them. At elevenM, we live and breathe PIAs. Whether undertaking detailed gap analyses and writing reports (narrative, factual, checklist based, metric based, anchored to organisational risk frameworks, national or international), training clients on PIAs or supporting them with automated tools and templates, we’re making the assessment of privacy impacts – and therefore privacy – easier to embed in project lifecycles. 

Ethics by design
The area of data ethics is a fast-emerging priority for data-driven businesses. We’ve been excited to work with clients on ways of designing and implementing ethical principles, including through the development of frameworks and toolkits that enable these principles to be operationalised into actions that organisations can take to make their data initiatives more ethical by design.

At a minimum, a similar structured framework or methodology should be articulated for any “by design” philosophy.

A final consideration for businesses is the need to synthesise these “by design” approaches as they take hold. There’s some risk that these various imperatives – privacy, security, data governance, ethics – will compete and clash as they converge at the design phase. It’ll be increasingly vital to have teams with cross-disciplinary capability or expertise who can efficiently integrate the objectives and outcomes of each area towards an overall outcome of greater trust.

We leave the closing words to Kid Cudi: “And the choices you made, it’s all by design”.

If we can help you with your “by design” approaches, reach us at hello@elevenm.com

Photo by davisuko on Unsplash

elevenM’s submission to the Privacy Act Review

In its current form, the Privacy Act is not fit for purpose for the modern digital economy – as has been widely observed, including in our previous posts.

It doesn’t adequately support consumers to understand how their information is to be handled or give them assurance that they have any control over such handling.  

It doesn’t aid consumers to make informed and impactful decisions.  

Its cornerstones of consent and notice are outdated and no longer effective.  

That’s why we are grateful to have the opportunity to contribute to the current review of the Privacy Act. 

Our submission to the Privacy Act Review Discussion Paper has recently been published by the Attorney-General’s Department. You can read it in full here. 

We welcome any feedback. Please get in touch at hello@elevenm.com

The enduring modern privacy challenge

Privacy has evolved considerably since ancient thinkers first wrote about it as a concept. In this post, elevenM’s Arjun Ramachandran plots this evolution and discusses how finding skilled people has become the enduring modern challenge for the privacy profession. We also preview elevenM’s upcoming webinar.

In his musings over privacy, Aristotle could hardly have contemplated the complex challenges that would be heralded by the digital future.

The ancient Greek philosopher drew a distinction between private family life (oikos) and the public sphere of political affairs (polis). While this conception of the division between domestic life and the state remains valid, the landscape today is so much more complex.

Our lives today play out and are made meaningful by our existence in an intricate web of relationships – with other people, with businesses and with information – largely mediated through our daily use of various digital services and participation in online platforms.

The modern privacy challenge – as distinct from Aristotle’s paradigm where privacy was more synonymous with domestic life – is perhaps to locate and define privacy within this more complex picture. In this context, our experiences of privacy are increasingly not simply the result of our personal choices (such as deciding to remain in the privacy of the family home). Instead, they’re dictated by how this broader digital and information ecosystem – one in which we all must necessarily participate – is engineered.

Banks, government agencies, streaming services, social media platforms … and all manner of services have now become, by default, digital businesses. So it is that the digital platforms we use to communicate and the organisations with which we share our information have become outsized gatekeepers of our privacy.

We know that there is strong community sentiment in favour of privacy, which provides direction for businesses seeking us as customers. Regulations such as the Privacy Act 1988 and the EU’s GDPR also set legal baselines for how privacy must be protected. But realising these outcomes ultimately falls to these organisations and how they collectively handle our information.

In our work with many of these companies, we’ve seen growing intent and motivation over recent years to get privacy right. There has been steady investment in establishing dedicated privacy teams and building privacy processes and programs.

But when their privacy capability reaches a certain point of maturity, many organisations appear to hit the same wall: an inability to find skilled professionals to keep doing the good work of privacy. “The global shortage of privacy professionals” has sadly become a well-documented state of affairs.

The challenge will only intensify in coming years as digitisation expands, with growing use of data-driven technologies like machine learning and artificial intelligence. This is to say nothing of the review of the Privacy Act currently underway, and the expanded compliance requirements it will bring.

At elevenM, we’ve been talking about this problem at length amongst ourselves and with some of our industry colleagues. We’d love to open up this problem to a broader audience – such is its breadth and criticality that we believe it requires a truly collective approach.

On February 8, elevenM’s Melanie Marks and Jordan Wilson-Otto, in partnership with the International Association of Privacy Professionals, will deliver a presentation diving deeper into the talent drought and exploring solutions. The presentation will be followed by a multidisciplinary panel discussion featuring leaders in privacy, academia and industry.

If you’d like to attend, click the link below to register. https://iapp.org/store/webconferences/a0l1P00000DbKYuQA


A Lawyer, a Chemist, an Ethicist, a Copywriter and a Programmer Walk Into a Bar

Presenters:
Melanie Marks, CIPP/E, CIPM, FIP, Principal, elevenM
Jordan Wilson-Otto, Principal, elevenM

Panellists:
Chantelle Hughes, Executive Manager, Technology Community, Commonwealth Bank
Sacha Molitorisz, Lecturer, Centre for Media Transition, University of Technology Sydney
Jacqui Peace, AU/NZ Privacy Management, Google
Daimhin Warner, CIPP/E, Principal & Director, Simply Privacy

Pete was right (but just this time)

By Arjun Ramachandran

Pete’s a hardened security professional. He’s been in this game for over 20 years and has the battle scars of many cyber uplift and remediation programs. He feels the pain of CISOs fighting the good fight.

I’m a former journo. Even though I’m no longer actively reporting, the craft matters to me and I get defensive and outraged in equal measures about the quality of print news.

Pete and I butted heads in recent days over the reporting of the NSW Auditor-General’s (AG) report into Transport for NSW and Sydney Trains.

The AG report, and much of the media coverage, was overtly critical. A “litany of cyber security weaknesses [identified in] a scathing review” was how the SMH described it.

Pete wasn’t overly happy about the coverage, feeling both the media reporting and the AG report to be unfair, particularly in the context of how organisations actually go about improving their cyber posture (more on this later).

While I defended the reporting as – for the most part – being a fair and straight write-up of the AG report, I have to concede that on the bigger point Pete was dead right.

There’s a problem in our industry, and in broader society, with how we often talk about and respond to reviews of organisations’ security programs. It’s not that we’re quick to talk about deficiencies that don’t exist or that aren’t serious, but that the way these reviews are presented drives a response and conversation that is counterproductive to good security outcomes.

There are no perfect security programs, or organisations with perfect security. The threat landscape changes daily and security uplifts require complex whole-of-organisation changes that necessarily take a long time. Even in the most mature organisations, “good security” is a work in progress, with gaps continuously identified and addressed. Any detailed review will find holes and inadequacies.

To be clear, this is not an argument to take lightly the adverse findings of a review. Arguably, these findings are a large part of why we do reviews, so that a measured and constructive response to them can lead to improved security.

But too often in our journeys at elevenM we see instances where the supercharged or out-of-proportion delivery of adverse findings leads to an equally supercharged response (sometimes in the form of its own large remediation program) that sees a sizeable redirection of resources, and ultimately the deferral or interruption of well-considered strategies or critical uplift projects.

We found it particularly uncomfortable that the AG report was based on a red team exercise. A red team exercise – where “authorised attackers” (in the words of the AG report) are given permission to try and break into systems – will always find security flaws. These exercises are typically conducted expressly to provide insights to security teams that they can learn from. To broadly publish those findings in the tone and manner that the AG report has done didn’t strike us as constructive.