elevenM collaborates with IPC NSW for PAW 2022

elevenM is excited to be collaborating with the Information and Privacy Commission NSW (IPC) for Privacy Awareness Week 2022 to help NSW government agencies manage privacy risks. 

We’ve partnered with IPC on the development of a new Privacy Impact Assessment (PIA) questionnaire that NSW public sector agencies can use to assess their websites for privacy risks. By using this tool, NSW Government agencies can draw on industry best practices to more efficiently assess privacy risks and identify remediation actions. 

The new IPC tool draws from elevenM’s PIA and privacy tooling suite. Anyone who’s done a PIA will know that PIA tools come in many shapes and sizes.  

There are tools for specific industries and business contexts and for individual jurisdictions and legal frameworks. Some tools function primarily as questionnaires, others offer guidance and recommendations. There are tools designed to be used by privacy experts, while others bake in expert knowledge so they can be used by anyone in the business. 

elevenM’s privacy experts have worked with all these kinds of PIA tools, using them with many business clients and in a variety of contexts. We’ve drawn on this collective experience to create a library of PIA and privacy tools that is genuinely useful and practical. 

If you’d like more information about our PIA tools, please contact us at hello@elevenm.com 

For more information about our collaboration with IPC, please refer to their PAW 2022 webpage.

When it’s all by design

elevenM Principal Arjun Ramachandran reflects on the explosion of “by design” methodologies, and why we must ensure it doesn’t become a catchphrase.

Things catch on fast in business.

Software-as-a-service had barely taken hold as a concept before enterprising outfits saw opportunities to make similar offerings up and down the stack. Platform-as-a-service and infrastructure-as-a-service followed swiftly, then data-as-a-service.

Soon enough, the idea broke free of the tech stack entirely. There emerged CRM-as-a-service, HR-as-a-service and CEO-as-a-service.

“As a service” could in theory reflect a fundamentally new business model. Often though, simply appending the words “as a service” to an existing product gave it a modern sheen that was “on trend”. Today, you can get elevators-as-a-service, wellness-as-a-service and even an NFT-as-a-service.

A few days ago, I came across a hashtag on Twitter – #trustbydesign – that gave me pause about whether something similar was underway in an area closer to home for me professionally.

For those in privacy and security, the “by design” imperative is not new. Nor is it trite.

“Privacy by design” – in which privacy considerations are baked into new initiatives at design phase, rather than remediated at the end – is a core part of modern privacy approaches. In a similar way, “secure by design” is now a familiar concept that emphasises shifting security conversations forward in the solution development journey, rather than relegating them to bug fixes or risk acceptances at the end.

But could we be entering similar territory to the as-a-service crew? For those involved broadly in the pursuit of humanising tech, on top of privacy by design and secure by design there are now exclamations of safety by design, resilience by design, ethical by design, care by design, empathy by design and the aforementioned trust by design.

Don’t get me wrong, I love a good spin-off. But as we continue to promote doing things “by design”, it’s worth keeping an eye on its usage and promotion, so it doesn’t become a hollow catchphrase at the mercy of marketing exploitation (for a parallel, see how some security peeps are now vigorously standing up to defend “zero trust”, a security approach, against assertions that it’s “just a marketing ploy”).

Doing things “by design” is important and valuable. It speaks to a crystallising of intent. A desire to do things right, and to do them up front. In fields like privacy and security, where risks have historically been raised late in the piece or as an afterthought (and sometimes ignored as a result), the emergence and adoption of “by design” approaches is a welcome and impactful change.

As “by design” catches on as a buzzword, however, it’s vital we ensure there’s substance sitting behind each of its variants. Consider the following two examples.

Privacy by design
Privacy Impact Assessments (PIAs) offer a rigorous, systematic and well-established assessment process that gives structure and tangible output to the higher intent of “privacy by design”. Regulators like the OAIC endorse their use and publish guidance on how to do them. At elevenM, we live and breathe PIAs. Whether undertaking detailed gap analyses and writing reports (narrative, factual, checklist-based, metric-based, anchored to organisational risk frameworks, national or international), training clients on PIAs or supporting them with automated tools and templates, we’re making the assessment of privacy impacts – and therefore privacy – easier to embed in project lifecycles. 

Ethics by design
The area of data ethics is a fast-emerging priority for data-driven businesses. We’ve been excited to work with clients on ways of designing and implementing ethical principles, including through the development of frameworks and toolkits that enable these principles to be operationalised into actions that organisations can take to make their data initiatives more ethical by design.

At a minimum, a similar structured framework or methodology should be articulated for any “by design” philosophy.

A final consideration for businesses is the need to synthesise these “by design” approaches as they take hold. There’s some risk that these various imperatives – privacy, security, data governance, ethics – will compete and clash as they converge at the design phase. It’ll be increasingly vital to have teams with cross-disciplinary capability or expertise who can efficiently integrate the objectives and outcomes of each area towards an overall outcome of greater trust.

We leave the closing words to Kid Cudi: “And the choices you made, it’s all by design”.

If we can help you with your “by design” approaches, reach us at hello@elevenm.com

Photo by davisuko on Unsplash

elevenM’s submission to the Privacy Act Review

In its current form, the Privacy Act is not fit for purpose for the modern digital economy – as has been widely observed, including in our previous posts.

It doesn’t adequately support consumers to understand how their information is to be handled or give them assurance that they have any control over such handling.  

It doesn’t aid consumers to make informed and impactful decisions.  

Its cornerstones of consent and notice are outdated and no longer effective.  

That’s why we are grateful to have the opportunity to contribute to the current review of the Privacy Act. 

Our submission to the Privacy Act Review Discussion Paper has recently been published by the Attorney-General’s Department. You can read it in full here.

We welcome any feedback. Please get in touch at hello@elevenm.com

The enduring modern privacy challenge

Privacy has evolved considerably since ancient thinkers first wrote about it as a concept. In this post, elevenM’s Arjun Ramachandran plots this evolution and discusses how finding skilled people has become the enduring modern challenge for the privacy profession. We also preview elevenM’s upcoming webinar.

In his musings over privacy, Aristotle could hardly have contemplated the complex challenges that would be heralded by the digital future.

The ancient Greek philosopher drew a distinction between private family life (oikos) and the public sphere of political affairs (polis). While this conception of the division between domestic life and the state remains valid, the landscape today is so much more complex.

Our lives today play out and are made meaningful by our existence in an intricate web of relationships – with other people, with businesses and with information – largely mediated through our daily use of various digital services and participation in online platforms.

The modern privacy challenge – as distinct from Aristotle’s paradigm where privacy was more synonymous with domestic life – is perhaps to locate and define privacy within this more complex picture. In this context, our experiences of privacy are increasingly not simply the result of our personal choices (such as deciding to remain in the privacy of the family home). Instead, they’re dictated by how this broader digital and information ecosystem – one in which we all must necessarily participate – is engineered.

Banks, government agencies, streaming services, social media platforms … and all manner of services have now become, by default, digital businesses. So it is that the digital platforms we use to communicate and the organisations with which we share our information have become outsized gatekeepers of our privacy.

We know that there is strong community sentiment in favour of privacy, which provides direction for businesses seeking us as customers. Regulations such as the Privacy Act 1988 and the EU’s GDPR also set legal baselines for how privacy must be protected. But realising these outcomes ultimately falls to these organisations and how they collectively handle our information.

In our work with many of these companies, we’ve seen growing intent and motivation over recent years to get privacy right. There has been steady investment in establishing dedicated privacy teams and building privacy processes and programs.

But when their privacy capability reaches a certain point of maturity, many organisations appear to hit the same wall: an inability to find skilled professionals to keep doing the good work of privacy. “The global shortage of privacy professionals” has sadly become a well-documented state of affairs.

The challenge will only intensify in coming years as digitisation expands, with growing use of data-driven technologies like machine learning and artificial intelligence. This is to say nothing of the review of the Privacy Act currently underway, and the expanded compliance requirements it will bring.

At elevenM, we’ve been talking about this problem at length amongst ourselves and with some of our industry colleagues. We’d love to open up this problem to a broader audience – such is its breadth and criticality that we believe it requires a truly collective approach.

On February 8, elevenM’s Melanie Marks and Jordan Wilson-Otto, in partnership with the International Association of Privacy Professionals, will deliver a presentation diving deeper into the talent drought and exploring solutions. The presentation will be followed by a multidisciplinary panel discussion featuring leaders in privacy, academia and industry.

If you’d like to attend, click the link below to register. https://iapp.org/store/webconferences/a0l1P00000DbKYuQA


A Lawyer, a Chemist, an Ethicist, a Copywriter and a Programmer Walk Into a Bar

Presenters:
Melanie Marks, CIPP/E, CIPM, FIP, Principal, elevenM
Jordan Wilson-Otto, Principal, elevenM

Panellists:
Chantelle Hughes, Executive Manager, Technology Community, Commonwealth Bank
Sacha Molitorisz, Lecturer, Centre for Media Transition, University of Technology Sydney
Jacqui Peace, AU/NZ Privacy Management, Google
Daimhin Warner, CIPP/E, Principal & Director, Simply Privacy

Pete was right (but just this time)

By Arjun Ramachandran

Pete’s a hardened security professional. He’s been in this game for over 20 years and has the battle scars of many cyber uplift and remediation programs. He feels the pain of CISOs fighting the good fight.

I’m a former journo. Even though I’m no longer actively reporting, the craft matters to me and I get defensive and outraged in equal measure about the quality of print news.

Pete and I butted heads in recent days over the reporting of the NSW Auditor-General’s (AG) report into Transport for NSW and Sydney Trains.

The AG report, and much of the media coverage, was overtly critical. A “litany of cyber security weaknesses [identified in] a scathing review” was how the SMH described it.

Pete wasn’t overly happy about the coverage, feeling both the media reporting and the AG report to be unfair, particularly in the context of how organisations actually go about improving their cyber posture (more on this later).

While I defended the reporting as – for the most part – being a fair and straight write-up of the AG report, I have to concede that on the bigger point Pete was dead right.

There’s a problem in our industry, and in broader society, with how we often talk about and respond to reviews of organisations’ security programs. It’s not that we’re quick to talk about deficiencies that don’t exist or that aren’t serious, but that the way these reviews are presented drives a response and conversation that is counterproductive to good security outcomes.

There are no perfect security programs, or organisations with perfect security. The threat landscape changes daily and security uplifts require complex whole-of-organisation changes that necessarily take a long time. Even in the most mature organisations, “good security” is a work in progress, with gaps continuously identified and addressed. Any detailed review will find holes and inadequacies.

To be clear, this is not an argument to take lightly the adverse findings of a review. Arguably, these findings are a large part of why we do reviews, so that a measured and constructive response to them can lead to improved security.

But too often in our journeys at elevenM we see instances where the supercharged or out-of-proportion delivery of adverse findings leads to an equally supercharged response (sometimes in the form of its own large remediation program) that sees a sizeable redirection of resources, and ultimately the deferral or interruption of well-considered strategies or critical uplift projects.

We found it particularly uncomfortable that the AG report was based on a red team exercise. A red team exercise – where “authorised attackers” (in the words of the AG report) are given permission to try and break into systems – will always find security flaws. These exercises are typically conducted expressly to provide insights to security teams that they can learn from. To broadly publish those findings in the tone and manner that the AG report has done didn’t strike us as constructive.

News round-up March 2021 — That horrible Exchange compromise, IoT security threats made real and digital platforms’ latest privacy challenges

Helping your business stay abreast and make sense of the critical stories in digital risk, cyber security and privacy. Email news@elevenM.com.au to subscribe.

The round-up

“But has the horse already bolted?” That’s the question senior US officials want companies who’ve applied patches for the highly publicised Microsoft Exchange security breach to ask themselves. The ugly Exchange Server compromise headlines our round-up, which also features an IoT breach that snared businesses across a range of industries, and the latest ransomware tactics.

Key articles

Thousands of Exchange servers breached prior to patching, CISA boss says

Summary: Four previously unidentified vulnerabilities in Microsoft Exchange Server have been exploited by state-sponsored actors operating out of China, with some reports citing as many as 60,000 organisations affected.

Key risk takeaway: Being patched against these vulnerabilities might be giving system administrators a false sense of confidence. Having observed numerous concerted attempts to exploit the flaws, US officials are urging companies to take aggressive action to investigate and remediate compromises that may have occurred before patching. Accordingly, in addition to moving fast to release patches, Microsoft has published detailed guidance on its website on how to investigate and remediate the vulnerabilities, and has even developed a “one-click mitigation tool” for organisations with smaller or less-resourced security teams. To learn more about how to develop a comprehensive vulnerability management program to drive timely remediation of dangerous security flaws (noting once again that patching alone may be insufficient in the Exchange incident), check out our recent blog series here.

#vulnerabilitymanagement #statesponsoredattack


Directors must face cyber risks

Summary: Directors of public firms are expected to soon face greater accountability for cyber risks under the Government’s cyber strategy.

Key risk takeaway: Lack of preparation for cyber risks by boards may soon be punishable, as the Government seeks to make changes to directors’ duties in the second half of 2021. The Government is light on details but has cited preventing customer credentials from ending up on the dark web as a potential example of these new obligations. The introduction of these obligations follows APRA’s Prudential Standard CPS 234, which imposed information security obligations on financial institutions. The moves are also part of a broader push for the Defence Department to take more forceful steps to “step in and protect” critical infrastructure companies, even if they are in the private sector.

#cyber #APRA #regulations


Hackers say they’ve gained access to surveillance cameras in Australian childcare centres, schools and aged care

Summary: Hacktivists gained access to approximately 150,000 Verkada surveillance cameras around the world after finding the username and password for an administrator account publicly exposed on the internet.

Key risk takeaway: This incident is not only a concrete example of the oft-described security risks of IoT (not to mention the implications of poor password management). It also highlights that risks and impacts from these devices may be felt differently across a variety of sectors. For example, uncomfortable regulatory conversations could arise for some of Verkada’s clients (which include childcare centres and aged-care facilities), given the cameras have built-in facial recognition technology and can be placed in sensitive locations. This incident also highlights ongoing challenges for organisations in achieving effective security assurance over their supply chains, especially cloud-based suppliers.

#cybersecurity #IoT #suppliersecurity


Universal Health Services reports $67 million in losses after apparent ransomware attack

Summary: Universal Health Services (UHS) has reported losing US$67 million from the September ransomware attack that affected a large range of systems.

Key risk takeaway: The serious financial implications of ransomware continue to be apparent, with UHS’ heavy losses comprising both lost revenue and increased labour costs. Meanwhile, Finnish psychology service Vastaamo, whose ransomware challenges we described in October, has now filed for bankruptcy. In a mark of how lucrative ransomware has become, ransomware operators reportedly pulled in $370 million in profits last year. Still, techniques continue to evolve. Researchers recently observed attackers breaching ‘hypervisor servers’ (which organisations use to manage virtual machines). Doing this allows attackers to encrypt all virtual machines on a system, increasing pressure on victim organisations to pay a ransom. In the face of the continued evolution of ransomware, Australia’s Federal Labor Opposition has now called for a national ransomware strategy comprising a variety of measures including regulations, law enforcement, sanctions, diplomacy, and offensive cyber operations. Some of the thinking in the strategy – e.g. around enforcement and sanctions – also aligns with recent expert calls for a global effort to create a new international collaboration model to tackle ransomware.

#ransomware #cybersecurity #costofdatabreach


WhatsApp tries again to explain what data it shares with Facebook and why

Summary: WhatsApp deferred the introduction of new privacy terms in order to buy time to better explain the change.

Key risk takeaway: This is one of many recent examples that show us it is no longer sufficient for online services to have a “take it or leave it” attitude in their privacy terms. Having first taken such an approach with its revised privacy terms, WhatsApp had to scramble to explain the changes after “tens of millions of WhatsApp users started exploring alternatives, such as Signal and Telegram”. More broadly, a recent New York Times editorial also argued that current consent models and the default practice of requiring consumers to opt out of data collection undermine privacy and must change. In our recent blog post we explore in detail the adequacy of current approaches to consent, which is being examined under the current review of the Australian Privacy Act.

#privacy #consent


TikTok reaches $92 million settlement over nationwide privacy lawsuit

Summary: TikTok agreed to settle 21 combined class-action lawsuits over invasion of privacy for US$92 million.

Key risk takeaway: Disregarding appropriate privacy measures will have financial consequences – whether that’s through regulatory fines, legal settlements (as is the case here) or the long-term erosion of user trust. Complaints from the lawsuits against TikTok alleged a range of issues, from using facial analysis to determine users’ ethnicity, gender, and age to illegal transmissions of private data. And just as TikTok said it didn’t want to take the time to litigate the complaints, it was also rated one of the least trusted digital platforms. Privacy responsiveness and social responsibility from digital platforms are fast becoming market differentiators, with 62% of Americans saying search and social media companies need more regulation.

#privacy #transparency #trust