Privacy in focus: A pub test for privacy

In this instalment of our ‘Privacy in focus’ blog series, we look beyond consent and explore other ideas that could make privacy easier and more manageable.

In our last post, without mentioning milkshakes, we talked about how central consent has become in the regulation of privacy and how putting so much weight on individuals’ choices can be problematic.

This time, we’re into solution mode. How can we make privacy choices easier? How might we start moving away from consent as the touchstone? What might privacy law look like if it didn’t rely so heavily on individuals to monitor and control how their information is used?

Start where you are

It is likely that notice and consent will always be a critical piece of the privacy puzzle, so before we start talking about evolving our entire regulatory model, we should probably cover what might be done to improve our current approach.

Last time, we identified four related ways in which individual privacy choices get compromised:

  • we don’t have enough time
  • we don’t have enough expertise
  • we behave irrationally
  • we are manipulated by system designers.

So what can we do to address these shortcomings?

We can raise the bar — rule out reliance on consent in circumstances where individuals are rushed, do not understand or have not truly considered their options, or in truth do not have any options at all.[1] Raising the bar would rule out a range of practices that are currently commonplace, such as seeking consent for broad and undefined future uses, or where there is a substantial imbalance of power between the parties (such as in an employment relationship). It would also rule out what is currently a very common practice of digital platforms — requiring consent to unrelated secondary uses of personal information (such as profiling, advertising and sale) as a condition of service or access to the platform.

We can demand better design: clearer, shorter and more intelligible privacy communications, perhaps even using standardised language and icons. Apple’s recently adopted privacy ‘nutrition labels’ for iPhone apps are a great example of what this can look like in practice, but we needn’t stop there — there is a whole field of study in legal information design, as well as established practices and regulatory requirements from other industries (such as financial services product disclosures), which could be drawn on.

We can ban specific bad practices: manipulative and exploitative behaviours that should be prohibited. The Australian Consumer Law goes some way to doing this already, for example by prohibiting misleading and deceptive conduct (as the ACCC’s recent victory against Google will attest). But we could go further, for example by following California in specifically prohibiting ‘dark patterns’ — language, visual design, unnecessary steps or other features intended to push users into agreeing to something they wouldn’t otherwise agree to. Another, related option is to require privacy-protective default settings, to prevent firms from leveraging the default effect to push users towards disclosing more than they would like.

Who should take responsibility for safety?

But even if we did all of the above (and we should), taking responsibility for our own privacy in a world that is built to track our every move is still an impossibly big ask. Instead of expecting individuals to make the right choices to protect themselves from harmful data practices, the Privacy Act should do more to keep people safe and ensure organisations do the right thing.

What would that look like in practice? Focusing on organisational accountability and harm prevention would mean treating privacy a bit more like product safety, or the safety of our built environment. In these contexts, regulatory design is less about how to enable consumer choice and more about identifying who is best equipped to take responsibility for the safety of a thing and how best to motivate that party to do so.

Without strict safety requirements on their products, manufacturers and builders may be incentivised to cut corners or take unnecessary risks. But for the reasons we’ve already discussed, it doesn’t make sense to look to consumers to establish or enforce these kinds of requirements.

Take the safety of a children’s toy, for example. What is more likely to yield the optimal outcome – having experts establish standards for safety and quality (eg: for non-toxic paints and plastics, part size, etc) which manufacturers must meet to access the Australian market, and against which products will be tested by a well-resourced regulator? Or leaving it to the market and having every individual, time-poor consumer assess safety for themselves at the time of purchase, based on their limited knowledge of the product’s inner workings?

Whenever we can identify practices that are dangerous or harmful, it is far more effective and efficient to centralise responsibility in the producer and establish strong, well-funded regulators to set and check safety standards. We don’t expect individual consumers to check for themselves whether the products they buy are safe to use, or whether a building is safe to enter.

Why should privacy be any different?

Just like with buildings or physical goods, we should be able to take a certain level of safety for granted with respect to our privacy. Where a collection, use or disclosure of personal information is clearly and universally harmful, the Privacy Act should prohibit it. It should not fall to the user to identify and avoid or mitigate that harm.

Privacy laws in other jurisdictions do this. Canada, for example, requires any handling of personal information to be ‘for purposes that a reasonable person would consider appropriate in the circumstances’. In Europe under the GDPR, personal data must be processed ‘fairly’. Both requirements have the effect of prohibiting or restricting the most harmful uses of personal information.

However, under our current Privacy Act, we have no such protection. There’s nothing in the Privacy Act that would stop, for example, an organisation publishing personal information, including addresses and photos, to facilitate stalking and targeting of individuals (provided they collected the information for that purpose). Similarly, there’s nothing in the Privacy Act that would stop an organisation using personal information to identify and target vulnerable individuals with exploitative content (such as gambling advertising).[2] The Australian Privacy Principles (APPs) do surprisingly little to prohibit unfair or unreasonable use and disclosure of personal information, even where it does not meet community expectations or may cause harm to individuals.

A pub test for privacy

It is past time that changed. We need a pub test for privacy. Or more formally, an overarching requirement that any collection, use or disclosure of personal information must be fair and reasonable in all the circumstances.

For organisations, the burden of this new requirement would be limited. Fairness and reasonableness are well-established legal standards, and the kind of analysis required — taking into account the broader circumstances surrounding a practice, such as community expectations and any potential for harm — is already routinely conducted in the course of a Privacy Impact Assessment (a standard process used in many organisations to identify and minimise the privacy impacts of projects). Fairness and reasonableness present a low bar, which the vast majority of businesses and business practices clear easily.

But for individuals, stronger baseline protections present real and substantial benefits. A pub test would rule out the most exploitative data practices and provide a basis for trust by shifting some responsibility for avoiding harm onto organisations. This lowers the level of vigilance required to protect against everyday privacy harms — so I don’t need to read a privacy policy to check whether my flashlight app will collect and share my location information, for example. It also helps to build trust in privacy protections themselves by bringing the law closer into line with community expectations — if an act or practice feels wrong, there’s a better chance that it will be.

The ultimate goal

The goal here — of both consent reforms and a pub test — is to make privacy easier for everyone. To create a world where individuals don’t need to read the privacy policy or understand how cookies work or navigate complex settings and disclosures just to avoid being tracked. Where we can simply trust that the organisations we’re dealing with aren’t doing anything crazy with our data, just as we can trust that the builders of a skyscraper aren’t doing anything crazy with the foundations. And to create a world where this clearer and globally consistent set of expectations also makes life easier for organisations.

These changes are not revolutionary, and they might not get us to that world immediately, but they are an important step along the path, and similar measures have been effective in driving better practices in other jurisdictions.

The review of the Privacy Act is not only an opportunity to bring us back in line with international best practice, but also an opportunity to make privacy easier and more manageable for us all.


Read all posts from the Privacy in focus series:
Privacy in focus: A new beginning
Privacy in focus: Who’s in the room?
Privacy in focus: What’s in a word?
Privacy in focus: The consent catch-22
Privacy in focus: A pub test for privacy

 


[1] In its submission on the issues paper, the OAIC recommends amending the definition of consent to require ‘a clear affirmative act that is freely given, specific, current, unambiguous and informed’.

[2] These examples are drawn from the OAIC’s submission to the Privacy Act Review Issues Paper – see pages 84-88.

News round-up July 2020 — European court decision on international data transfers, software vulnerabilities, and more

Helping your business stay abreast of and make sense of the critical stories in digital risk, cyber security and privacy. Email news@elevenM.com.au to subscribe.

 

The round-up

This month saw some big plays in the world of privacy – most notably the striking down by a European Court of a mechanism for international data transfers. We look at the implications for Australian organisations coming out of the judgement. This month we’re also reminded of the inherent vulnerability of software via stories about backdoors in Chinese tax software, a flood of critical patches released for popular enterprise software products and, of course, more yarns about ransomware.

Key articles:

Chinese bank requires foreign firm to install app with covert backdoor

Summary: Tax software required to be used by organisations that conduct business in China has been found to have been infected with malware.

Key risk takeaway: This discovery by security researchers is a cautionary tale for any business with operations in China. Dubbed “GoldenSpy”, the backdoor in the tax software reportedly allowed the remote execution of commands on infected computers. A similar backdoor was later discovered in the other of the two Chinese government-authorised tax software products. Concerns have long been raised about the invasive security provisions levelled at western businesses by China, though the covert nature of this incursion is rather more sinister. The FBI warns that companies in the healthcare, chemical and finance sectors are in particular danger. Echoing the FBI’s advice, businesses should ensure they patch critical vulnerabilities on their systems, monitor applications for unauthorised access and protect accounts through multi-factor authentication.

Tags: #cyberhygiene #cyberespionage

 

Europe’s top court strikes down flagship EU-US data transfer mechanism

Summary: The EU-US Privacy Shield, a key framework for regulating transatlantic data transfers, has been declared invalid by the Court of Justice of the European Union with immediate effect. Alternative international data transfer mechanisms remain valid subject to additional obligations imposed upon companies.

Key risk takeaway: Though primarily focused on transatlantic transfers, the Court’s judgement will also give pause to Australian organisations that use Standard Contractual Clauses (SCCs), a key tool for Australia-EU data transfers. Whilst confirming that SCCs remain a valid means for international data transfers under the GDPR, the Court’s judgement imposes an onus on companies relying on SCCs to undertake case-by-case determinations on whether foreign protections are adequate under EU standards and whether additional safeguards are required.

Tags: #privacy #GDPR

 

Apple Just Crippled IDFA, Sending An $80 Billion Industry Into Upheaval

Summary: Apple’s shift to requiring opt-in consent for the IDFA (Identifier for Advertisers), a unique identifier that enables advertisers to track user behaviour across apps for targeting and attribution purposes, threatens to upend the mobile advertising ecosystem.

Key risk takeaway: Apple continues to brandish its privacy-centric approach as a key competitive asset and brand differentiator. This latest move was announced alongside a series of privacy-conscious updates and has been celebrated by privacy advocates as a fundamental step towards greater user transparency and control over use of their data. The change involves users now receiving explicit prompts requiring opt-in consent, as opposed to these controls being buried within Apple’s settings. The update has particular implications for both Facebook and Google, whose ad-tech services depend on aggregating large troves of data with IDFAs. Meanwhile, in another fillip for privacy advocates this month, the public broadcaster in the Netherlands has published data showing that it grew ad revenue after ditching ad trackers and moving to contextual ads.
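For developers, the mechanics of the change are worth a quick look. Under Apple’s App Tracking Transparency framework (iOS 14 and later), an app must trigger the system prompt before it can read the IDFA. Here is a minimal sketch of that flow in Swift (the function name is ours, and an NSUserTrackingUsageDescription entry in Info.plist is assumed):

```swift
import AppTrackingTransparency
import AdSupport

// Minimal sketch of the opt-in flow introduced with iOS 14: the app
// must ask for permission via a system prompt before reading the IDFA.
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only after an explicit opt-in is the real IDFA available.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking permitted, IDFA: \(idfa)")
        default:
            // Denied, restricted or not yet decided: the IDFA is
            // returned as all zeroes, so tracking is off by default.
            print("Tracking not permitted")
        }
    }
}
```

The significance lies in the default: previously the identifier was available unless a user dug into settings to limit ad tracking; now it is unavailable unless the user actively opts in.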

Tags: #privacy #trust

 

Twitter partially shut down as hackers compromise 45 high profile accounts

Summary: In a coordinated attack, hackers gained control of dozens of high-profile Twitter accounts, including those of former US President Barack Obama, US presidential candidate Joe Biden, Amazon CEO Jeff Bezos, Elon Musk, Apple and many others.

Key risk takeaway: While the hackers’ motivations here appear to have been rather benign (to propagate a bitcoin scam message), their unprecedented access could have had much more serious consequences. Imagine a nation state with full control of these compromised accounts, intent on derailing an election. It should raise the question for any organisation – what damage could a hacker do with access to your internal tools? The methods behind the attack were also relatively standard: social engineering to gain access to an internal customer support tool, which the attackers then used to reset account passwords. No zero-day cyber-gymnastics here. The obvious lesson is ‘back to basics’ – training, awareness and restricted privileges. The deeper concern is how Twitter and other social media have become so central to our democracies – failures of this kind cannot be allowed to happen.

Tags: #socialengineering #databreaches #geopolitics

 

2020 is on track to hit a new data breach record

Summary: Troy Hunt’s ‘Have I Been Pwned’ database reaches 10 billion records, while a new report estimates that 8.4 billion records were exposed in the first quarter of 2020 alone.

Key risk takeaway: The internet is now awash with compromised credentials, making password re-use a greater threat than ever. If you’ve used a given password before, the likelihood is it’s now out there somewhere and can be used to compromise your account. This threat to account security is compounded by the continued rise of phishing and social engineering attacks, particularly in the new COVID-19 normal. The rapid switch to remote working, combined with the uncertainty of the pandemic, has given rise to effective new phishing lures such as fake pandemic updates or notifications from popular remote working applications. And so the parade of data breaches continues. From dating apps to hotel chains, airlines, telcos and many others, news of data breaches has become part of the background hum of our industry.
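As an aside for the technically minded, services can check passwords against breach corpora like Have I Been Pwned without ever seeing the password, via its ‘Pwned Passwords’ range API and k-anonymity: only the first five characters of the password’s SHA-1 hash are sent. A rough sketch in Swift (the function name and error handling are ours):

```swift
import Foundation
import CryptoKit

// Rough sketch: check whether a password appears in known breaches via
// the Have I Been Pwned "Pwned Passwords" range API. Only the first
// five hex characters of the SHA-1 hash leave the device (k-anonymity).
func checkPasswordPwned(_ password: String, completion: @escaping (Bool) -> Void) {
    let hash = Insecure.SHA1.hash(data: Data(password.utf8))
        .map { String(format: "%02X", $0) }
        .joined()
    let prefix = String(hash.prefix(5))
    let suffix = String(hash.dropFirst(5))
    let url = URL(string: "https://api.pwnedpasswords.com/range/\(prefix)")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data, let body = String(data: data, encoding: .utf8) else {
            completion(false)
            return
        }
        // Each response line is "<hash suffix>:<breach count>"; a match
        // means the password has appeared in at least one known breach.
        let found = body.components(separatedBy: .newlines)
            .contains { $0.hasPrefix(suffix) }
        completion(found)
    }.resume()
}
```

A hit is a strong signal the password should be retired; pairing checks like this with multi-factor authentication blunts the threat of credential stuffing.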

Tags: #databreaches

 

Garmin confirms ransomware attack, keeps quiet on possible Evil Corp. involvement

Summary: Garmin said that while there was no indication attackers had accessed customer data, the attack did interrupt website functionality, customer support services, user apps and corporate communications. It was one of many ransomware attacks this month.

Key risk takeaway: This particular attack draws attention to the incredibly precarious position ransomware victims find themselves in. Enduring widespread disruption to services due to WastedLocker ransomware, Garmin was reportedly faced with a US$10 million ransom to decrypt its files. Reports also claim that Russian gang Evil Corp was behind the attack. The gang’s members have been sanctioned by the US government, making any dealings with them illegal. Services are now back online and Garmin has not confirmed whether it paid the ransom. We also learned this month that ransomware gangs are a patient bunch – spending long periods of time within the networks they have breached in order to gather as much information as possible to maximise leverage in ransom demands.

Tags: #ransomware

 

US cyber officials urge patching of bug affecting up to 40K SAP customers

Summary: A critical vulnerability in SAP applications could affect up to 40,000 customers.

Key risk takeaway: Patch your critical systems! The last month has seen a rash of patches released for serious vulnerabilities in widely used systems. In addition to the SAP bug, software company Citrix announced yet more bugs (but with fixes), as did Microsoft, Palo Alto Networks and F5 Networks. Respected guidance such as the Australian Government’s Essential Eight strategies recommends timely patching as a foundational security practice. In practice, many organisations struggle to prioritise the many, many security fixes that increasingly demand action. The last month will only have further compounded the headaches of systems administrators (and likely intensified their pleas for more attention to secure coding practices).

Tags: #vulnerabilitymanagement

Happy birthday Notifiable Data Breaches Scheme. How have you performed?

A year ago today, Australian businesses became subject to a mandatory data breach reporting scheme. Angst and anticipation came with its introduction – angst for the disruption it might have on unprepared businesses and anticipation of the positive impact it would have for privacy.

Twelve months on, consumers are arguably more troubled about the lack of safeguards for privacy, while businesses face the prospect of further regulation and oversight. Without a fundamental shift in how privacy is addressed, the cycle of heightened concern followed by further regulation looks set to continue.

It would be folly to pin all our problems on the Notifiable Data Breaches (NDB) scheme. Some of the headline events that exacerbated community privacy concerns in the past year fell outside its remit. The Facebook / Cambridge Analytica scandal stands out as a striking example.

The NDB scheme has also made its mark. For one, it has heralded a more transparent view of the state of breaches. More than 800 data breaches have been reported in the first year of the scheme.

The data also tells us more about how breaches are happening. Malicious attacks are behind the majority of breaches, though humans play a substantial role. Not only do about a third of breaches involve a human error, such as sending a customer’s personal information to the wrong person, but a large portion of malicious attacks directly involve human factors such as convincing someone to give away their password.

And for the most part, businesses got on with the task of complying. In many organisations, the dialogue has shifted from preventing breaches to being well prepared to manage and respond to them. This is a fundamentally positive outcome – as data collection grows and cyber threats get more pernicious, breaches will become more likely and businesses, as they do with the risk of fire, ought to have plans and drills to respond effectively.

And still, the jury is out on whether consumers feel more protected. Despite the number of data breach notifications in the past year, events suggest it would be difficult to say transparency alone had improved the way businesses handle personal information.

The sufficiency of our legislative regime is an open question. The ACCC is signalling it will play a stronger role in privacy, beginning with recommending a strengthening of protections under the Privacy Act. Last May, the Senate also passed a motion to bring Australia’s privacy regime in line with Europe’s General Data Protection Regulation (GDPR), a much more stringent and far-reaching set of protections.

Australian businesses ought not be surprised. The Senate’s intent aligns with what is occurring internationally. In the US, where Facebook’s repeated breaches have catalysed the public and polity, moves are afoot towards new federal privacy legislation. States like California have already brought in GDPR-like legislation, while Asian countries are similarly strengthening their data protection regimes. With digital protections sharpening as a public concern, a federal election in Australia this year further adds to the possibility of a strengthened approach to privacy by authorities.

Businesses will want to free themselves from chasing the tail of an ever-moving regulatory landscape. Given the public focus on issues of trust, privacy also emerges as a potential competitive differentiator.

A more proactive and embedded approach to privacy addresses both these outcomes. Privacy by design is an emerging discipline in which privacy practices are embedded from the outset. In short, with privacy in mind from an early stage, new business initiatives can be designed to meet privacy requirements before they are locked into a particular course of action.

We also need to look to the horizon, and it’s not as far away as we think. Artificial intelligence (AI) is already pressing deep into many organisations, and raises fundamental questions about whether current-day privacy approaches are sufficient. AI represents a paradigm shift that challenges our ability to know in advance why we are collecting data and how we intend to use it.

And so, while new laws introduced in the past 12 months were a major step forward in the collective journey to better privacy, in many ways the conversation is just starting.

Our thoughts on the year ahead

At elevenM, we love shooting the breeze about all things work and play. We recently got together as a team to kick off the new year, share what we’d been up to and the thoughts inspiring us as we kick off 2019. Here’s a summary…

Early in the new year, under a beating sun at the Sydney Cricket Ground, our principal Arjun Ramachandran found himself thinking about cyber risk.

“Indian batsman Cheteshwar Pujara was piling on the runs and I realised – ‘I’m watching a masterclass in managing risk’. He’s not the fanciest or most talented batsman going around, but what Pujara has is total command over his own strengths and weaknesses. He knows when to be aggressive and when to let the ball go. In the face of complex external threats, I was struck by how much confidence comes from knowing your own capabilities and posture.”

A geeky thought to have at the cricket? No doubt. But professional parallels emerge when you least expect them. Particularly after a frantic year in which threats intensified, breaches got bigger, and major new privacy regulations came into force.

Is there privacy in the Home?

Far away from the cricket, our principal Melanie Marks was also having what she describes as a “summer quandary”. Like many people, Melanie had her first extended experience of a virtual assistant (Google Home) over the break.

“These AI assistants are a lot of fun to engage with and offer endless trivia, convenience and integrated home entertainment without having to leave the comfort of the couch,” Melanie says. “However, it’s easy to forget they’re there and it’s hard to understand their collection practices, retention policies and deletion procedures (not to mention how they de-identify data, or the third parties they rely upon).”

Melanie has a challenge for Google in 2019: empower your virtual assistant to answer the question: “Hey Google – how long do you keep my data?” as quickly and clearly as it answers “How do you make an Old Fashioned?”.

Another of our principals and privacy stars, Sheila Fitzpatrick, has also been pondering the growing tension between new technologies and privacy. Sheila expects emerging technologies like AI and machine learning to keep pushing the boundaries of privacy rights in 2019.

“Many of these technologies have the ‘cool’ factor but do not embrace the fundamental right to privacy,” Sheila says. “They believe the more data they have to work with, the more they can expand the capabilities of their products without considering the negative impact on privacy rights.”

The consumer issue of our time

We expect to see the continued elevation of privacy as a public issue in 2019. Watch for Australia’s consumer watchdog, the Australian Competition and Consumer Commission, to get more involved in privacy, Melanie says. The ACCC foreshadowed as much in December in its preliminary report into digital platforms.

Business will also latch onto the idea of privacy as a core consumer issue, says our Head of Product Development Alistair Macleod. Some are already using it as a competitive differentiator, Alistair notes, pointing to manufacturers promoting privacy-enhancing features in new products and Apple’s hard-to-miss pro-privacy billboard at the CES conference just this week.

We’ll also see further international expansion of privacy laws in 2019, Sheila says, particularly in Asia Pacific and Canada, where some requirements (such as around data localisation) will even exceed the provisions of the GDPR, widely considered a high watermark for privacy when it was introduced last May.

Cyber security regulations have their turn

But don’t forget cyber security regulation. Our principal Alan Ligertwood expects the introduction of the Australian Prudential Regulation Authority’s new information security standard CPS 234 in July 2019 to have a significant impact.

CPS 234 applies to financial services companies and their suppliers. Alan predicts the standard’s shift to a “trust but verify” approach, in which policy and control frameworks are actually tested, could herald a broader shift by regulators towards more substantive oversight of regulatory and policy compliance.

There’s also a federal election in 2019. We’d be naïve not to expect jobs and national security to dominate the campaign, but the policy focus given to critical “new economy” issues like cyber security and privacy in the lead-up to the polls will be worth watching. In recent years cyber security as a portfolio has been shuffled around and dropped like a hot potato at ministerial level.

Will the Government that forms after the election – of whichever colour – show it more love and attention?

New age digital risks

At the very least, let’s hope cyber security agencies and services keep running. Ever dedicated, over the break Alan paid a visit to the website of the National Institute of Standards and Technology – the US standards body that creates the respected Cybersecurity Framework – only to find it unavailable due to the US government shutdown.

“It didn’t quite ruin my holiday, but it did get me thinking about unintended consequences and third party risk. A squabble over border wall funding has resulted in a global cyber security resource being taken offline indefinitely.”

It points to a bigger issue. Third parties and supply chains, and poor governance over them, will again be a major contributor to security and privacy risk this year, reckons Principal Matt Smith.

“The problem is proving too hard for people to manage correctly. Even companies with budgets which extend to managing supplier risk are often not able to get it right – too many suppliers and not enough money or capacity to perform adequate assurance.”

If the growing use of third parties demands that businesses re-think security, our Senior Project Manager Mike Wood sees the same trend in cloud adoption.

“Cloud is the de facto way of running technology for most businesses. Many are still transitioning but have traditional security thinking still in place. A cloud transition must come with a fully thought-through security mindset.”

Mike’s expecting to see even stronger uptake of controls like Cloud Access Security Brokers in 2019.

But is this the silver bullet?

We wonder if growing interest in cyber risk insurance in 2019 could be the catalyst for uplifted controls and governance across the economy. After all, organisations will need to have the right controls and processes in place in order to qualify for insurance in line with underwriting requirements.

But questions linger over the maturity of these underwriting methodologies, Alan notes.

“Organisations themselves find it extremely difficult to quantify and adequately mitigate cyber threats, yet insurance companies sell policies to hedge against such an incident.”

The likely lesson here is for organisations not to treat cyber insurance as a silver bullet. Instead, do the hard yards and prioritise a risk-based approach built on strong executive sponsorship, effective governance, and actively engaging your people in the journey.

It’s all about trust

If there was a common theme in our team’s readings and reflections after the break, it was probably over the intricacies of trust in the digital age.

When the waves stopped breaking on Manly beach, Principal Peter Quigley spent time following the work of Renee DiResta, who has published insightful research into the use of disinformation and malign narratives in social media. There’s growing awareness of how digital platforms are being used to sow distrust in society. In a similar vein, Arjun has been studying the work of Peter Singer, whose research into how social media is being weaponised could have insights for organisations wanting to use social media to enhance trust, particularly in the wake of a breach.

Alistair notes how some technology companies have begun to prioritise digital wellbeing. For example, new features in Android and iOS that help users manage their screen time – and thus minimise harm – reflect the potential for a more trusting, collaborative digital ecosystem.

At the end of the day, much of our work as a team goes towards helping organisations mitigate digital risk in order to increase digital trust – among customers, staff and partners. The challenges are plentiful but exciting, and we look forward to working on them with many of you in 2019.

End of year wrap

The year started with a meltdown. Literally.

New Year’s Eve hangovers had barely cleared when security researchers announced they had discovered security flaws that would impact “virtually every user of a personal computer”. “Happy new year” to you too. Dubbed “Meltdown” and “Spectre”, the flaws in popular computer processors would allow hackers to access sensitive information from memory – certainly no small thing. Chipmakers urgently released updates. Users were urged to patch. Fortunately, the sky didn’t fall in.

If all this was meant to jolt us into taking notice of data security and privacy in 2018 … well, that seemed unnecessary. With formidable new data protection regulations coming into force, many organisations were already stepping into this year with a much sharper focus on digital risk.

The first of these new regulatory regimes took effect in February, when Australia finally introduced mandatory data breach reporting. Under the Notifiable Data Breaches (NDB) scheme, overseen by the Office of the Australian Information Commissioner, applicable organisations must now disclose any breaches of personal information likely to result in serious harm.

In May, the world also welcomed the EU’s General Data Protection Regulation (GDPR). Kind of hard to miss, with an onslaught of updated privacy policies flooding user inboxes from companies keen to show compliance.

The promise of GDPR is to strengthen consumers’ consent and control over their data and place a greater emphasis on transparency. Its extra-territorial nature (GDPR applies to any organisation servicing customers based in Europe) meant companies all around the world worked fast to comply, updating privacy policies, implementing privacy by design and creating data breach response plans. A nice reward for these proactive companies was evidence that GDPR is emerging as a template for new privacy regulations around the world. GDPR compliance gets you ahead of the game.

With these regimes in place, anticipation built around who would be first to test them out. For the local NDB scheme, the honour fell to PageUp. In May, the Australian HR service company detected that an unknown attacker had gained access to job applicants’ personal details, as well as the usernames and passwords of PageUp employees.

It wasn’t the first breach reported under the NDB scheme, but it was arguably the first big one – not least because of who else it dragged into the fray. It was a veritable who’s who of big Aussie brands – Commonwealth Bank, Australia Post, Coles, Telstra and Jetstar, to name a few. For these PageUp clients, their own data had been caught up in a breach of a service provider, shining a bright light on what could be the security lesson of 2018: manage your supplier risks.

By July we were all bouncing off the walls. Commencement of the My Health Record (MHR) three-month opt-out period heralded an almighty nationwide brouhaha. The scheme’s privacy provisions came under heavy fire, most particularly the fact that the scheme was opt-out by default, loose provisions around law enforcement access to health records, and a lack of faith in how well-versed those accessing the records were in good privacy and security practices. Things unravelled so much that the Prime Minister had to step in, momentarily taking a break from more important national duties such as fighting those coming for his job.

Amendments to the MHR legislation were eventually passed (addressing some, but not all of these issues), but not before public trust in the project was severely tarnished. MHR stands as a stark lesson for any organisation delivering major projects and transformations – proactively managing the privacy and security risks is critical to success.

If not enough attention was given to data concerns in the design of MHR, security considerations thoroughly dominated the conversation about another national-level digital project – the build out of Australia’s 5G networks. After months of speculation, the Australian government in August banned Chinese telecommunications company Huawei from taking part in the 5G rollout, citing national security concerns. Despite multiple assurances from the company about its independence from the Chinese government and offers of greater oversight, Australia still said ‘no way’ to Huawei.

China responded frostily. Some now fear we’re in the early stages of a tech cold war in which retaliatory bans and invasive security provisions will be levelled at western businesses by China (where local cyber security laws should already be a concern for businesses with operations in China).

Putting aside the geopolitical ramifications, the sobering reminder for any business from the Huawei ban is the heightened concern about supply chain risks. With supply chain attacks on the rise, managing vendor and third-party security risks requires the same energy as attending to risks in your own infrastructure.

Ask Facebook. A lax attitude towards its third-party partners brought the social media giant intense pain in 2018. The Cambridge Analytica scandal proved to be one of the most egregious misuses of data and abuses of user trust in recent memory, with the data of almost 90 million Facebook users harvested by a data mining company to influence elections. The global public reacted furiously. Many users would delete their Facebook accounts in anger. Schadenfreude enthusiasts had much to feast on when Facebook founder and CEO Mark Zuckerberg testified uncomfortably in front of the US Senate.

The social network would find itself under the pump on various privacy and security issues throughout 2018, including the millions of fake accounts on its platform, the high profile departure of security chief Alex Stamos and news of further data breaches.

But when it came to brands battling breaches, Facebook hardly went it alone in 2018. In the first full reporting quarter after the commencement of the NDB scheme, the OAIC received 242 data breach notifications, followed by 245 notifications for the subsequent quarter.

The scale of global data breaches has been eye-watering. Breaches involving Marriott International, Exactis, Aadhaar and Quora all eclipsed 100 million affected customers.

With breaches on the rise, it becomes ever more important that businesses be well prepared to respond. The maxim that organisations will increasingly be judged not on the fact they had a breach, but on how they respond, grew strong legs this year.

But we needn’t succumb to defeatism. Passionate security and privacy communities continue to try to reduce the likelihood or impact of breaches and other cyber incidents. Technologies and solutions useful in mitigating common threats gained traction. For instance, multi-factor authentication had more moments in the sun this year, not least because we became more attuned to the flimsiness of relying on passwords alone (thanks Ye!). Security solutions supporting other key digital trends also continue to gain favour – tools like Cloud Access Security Brokers enjoyed strong momentum this year as businesses look to manage the risks of moving towards cloud.

Even finger-pointing was deployed in the fight against hackers. This year, the Australian government and its allies began to publicly attribute a number of major cyber campaigns to state-sponsored actors. A gentle step towards deterrence, the attributions signalled a more overt and more public pro-security posture from the Government. Regrettably, some of this good work may have been undone late in the year with the passage of an “encryption bill”, seen by many as weakening the security of the overall digital ecosystem and damaging to local technology companies.

In many ways, in 2018 we were given the chance to step into a more mature conversation about digital risk and the challenges of data protection, privacy and cyber security. Sensationalist FUD in earlier years about cyber-attacks or crippling GDPR compliance largely gave way to a more pragmatic acceptance of the likelihood of breaches, high public expectations and the need to be well prepared to respond and protect customers.

At a strategic level, a more mature and business-aligned approach is also evident. Both the Australian and US governments introduced initiatives that emphasise the value of a risk-based approach to cyber security, which is also taking hold in the private sector. The discipline of cyber risk management is helping security executives better understand their security posture and have more engaging conversations with their boards.

All this progress, and we still have the grand promise that AI and blockchain will one day solve all our problems. Maybe in 2019…

Till then, we wish you a happy festive season and a great new year.

From the team at elevenM.

You get an Aadhaar! You get an Aadhaar! Everybody gets an Aadhaar!

On 26 September 2018, the Supreme Court of India handed down a landmark ruling on the constitutionality of the biggest biometric identity system in the world, India’s Aadhaar system.

The Aadhaar was implemented in 2016, and has since acquired a billion registered users. It’s a 12-digit number issued to each resident of India, linked to biometrics including all ten fingerprints, facial photo and iris scans, and basic demographic data, all held in a central database. Since being implemented, it’s been turned to a variety of uses, including everything from proof of identification, tracking of government employee attendance, ration distribution and fraud reduction, entitlements for subsidies, and distribution of welfare benefits. The Aadhaar has quickly become mandatory for access to essential services such as bank accounts, mobile phone SIMs and passports.

Beyond banks and telcos, other private companies have also been eager to use the Aadhaar, spurring concerns about private sector access to the database.

Beginning in 2012, a series of legal challenges was levelled at the Aadhaar, including claims that the Aadhaar violated constitutionally protected privacy rights.

In a mammoth 1,448-page judgement, the Court made several key rulings:

  • The Court ruled that the Aadhaar system does not in itself violate the fundamental right to privacy. However, the Court specifically called out a need for a ‘robust data protection framework’ to ensure privacy rights are protected.
  • At the same time, the Aadhaar cannot be mandatory for some purposes, including access to mobile phone services and bank accounts, as well as access to some government services, particularly education. Aadhaar-based authentication will still be required for tax administration (this resolves some uncertainty from a previous ruling).
  • The private sector cannot demand that an Aadhaar be provided, and private usage of the Aadhaar database is unconstitutional unless expressly authorised by law.
  • The Court also specified that law enforcement access to Aadhaar data will require judicial approval, and any national security-based requests will require consultation with High Court justices (i.e., the highest court in the relevant Indian state).
  • Indian citizens must be able to file complaints regarding data breaches involving the Aadhaar; prior to this judgment, the ability to file complaints regarding violations of the Aadhaar Act was limited to the government authority administering the Aadhaar system, the Unique ID Authority of India.

The Aadhaar will continue to be required for many essential government services, including welfare benefits and ration distribution – s 7 of the Aadhaar Act makes Aadhaar-based authentication a pre-condition for accessing “subsidy, benefits or services” provided by the government. This has been one of the key concerns of Aadhaar opponents – that access to essential government services shouldn’t be dependent on Aadhaar verification. There have been allegations that people have been denied rations due to ineffective implementation of Aadhaar verification, leading to deaths.

It’s also unclear whether information collected under provisions which have now been ruled as unconstitutional – for example, Aadhaar data collected by Indian banks and telcos – will need to be deleted.

As Australia moves towards linking siloed government databases and creating its own digital identity system, India’s experience with the Aadhaar offers many lessons. A digital identity system offers many potential benefits, but all technology is a double-edged sword. Obviously, Australia will need to ensure that any digital identity system is secure but, beyond that, that the Australian public trusts the system. To obtain that trust, Australian governments will need to ensure the system and the uses of the digital identity are transparent and ethical – that the system will be used in the interests of the Australian public, in accordance with clear ethical frameworks. Those frameworks will need to be flexible enough to enable interfaces with the private sector to reap the full benefits of the system, but robust enough to ensure those uses are in the public interest. Law enforcement access to government databases remains a major concern for Australians, and will need to be addressed. It’s a tightrope, and it will need to be walked very carefully indeed.

