Privacy in focus: A pub test for privacy

In this instalment of our ‘Privacy in focus’ blog series, we look beyond consent and explore other ideas that could make privacy easier and more manageable.

In our last post, without mentioning milkshakes, we talked about how central consent has become in the regulation of privacy and how putting so much weight on individuals’ choices can be problematic.

This time, we’re into solution mode. How can we make privacy choices easier? How might we start moving away from consent as the touchstone? What might privacy law look like if it didn’t rely so heavily on individuals to monitor and control how their information is used?

Start where you are

It is likely that notice and consent will always be a critical piece of the privacy puzzle, so before we start talking about evolving our entire regulatory model, we should probably cover what might be done to improve our current approach.

Last time, we identified four related ways in which individual privacy choices get compromised:

  • we don’t have enough time
  • we don’t have enough expertise
  • we behave irrationally
  • we are manipulated by system designers.

So what can we do to address these shortcomings?

We can raise the bar — rule out reliance on consent in circumstances where individuals are rushed, do not understand or have not truly considered their options, or in truth do not have any options at all.[1] Raising the bar would rule out a range of practices that are currently commonplace, such as seeking consent for broad and undefined future uses, or where there is a substantial imbalance of power between the parties (such as in an employment relationship). It would also rule out what is currently a very common practice of digital platforms — requiring consent to unrelated secondary uses of personal information (such as profiling, advertising and sale) as a condition of service or access to the platform.

We can demand better design — clearer, shorter and more intelligible privacy communications, perhaps even using standardised language and icons. Apple’s recently adopted privacy ‘nutrition labels’ for iPhone apps are a great example of what this can look like in practice, but we needn’t stop there — there is a whole field of study in legal information design, and established practices and regulatory requirements from other industries (such as financial services product disclosures), which could be drawn on.

We can ban specific bad practices — manipulative and exploitative behaviours that should be prohibited. Australian Consumer Law goes some way to doing this already, for example by prohibiting misleading and deceptive conduct (as the ACCC’s recent victory against Google will attest). But we could go further, for example by following California in specifically prohibiting ‘dark patterns’ — language, visual design, unnecessary steps or other features intended to push users into agreeing to something they wouldn’t otherwise agree to. Another, related option is to require privacy protective default settings to prevent firms from leveraging the default effect to push users towards disclosing more than they would like.

Who should take responsibility for safety?

But even if we did all of the above (and we should), taking responsibility for our own privacy in a world that is built to track our every move is still an impossibly big ask. Instead of expecting individuals to make the right choices to protect themselves from harmful data practices, the Privacy Act should do more to keep people safe and ensure organisations do the right thing.

What would that look like in practice? Focusing on organisational accountability and harm prevention would mean treating privacy a bit more like product safety, or the safety of our built environment. In these contexts, regulatory design is less about how to enable consumer choice and more about identifying who is best equipped to take responsibility for the safety of a thing and how best to motivate that party to do so.

Without strict safety requirements on their products, manufacturers and builders may be incentivised to cut corners or take unnecessary risks. But for the reasons we’ve already discussed, it doesn’t make sense to look to consumers to establish or enforce these kinds of requirements.

Take the safety of a children’s toy, for example. What is more likely to yield the optimal outcome – having experts establish standards for safety and quality (eg: for non-toxic paints and plastics, part size, etc) which manufacturers must meet to access the Australian market, and against which products will be tested by a well-resourced regulator? Or leaving it to the market and having every individual, time-poor consumer assess safety for themselves at the time of purchase, based on their limited knowledge of the product’s inner workings?

Whenever we can identify practices that are dangerous or harmful, it is far more effective and efficient to centralise responsibility in the producer and establish strong, well-funded regulators to set and check safety standards. We don’t expect individual consumers to check for themselves whether the products they buy are safe to use, or whether a building is safe to enter.

Why should privacy be any different?

Just like with buildings or physical goods, we should be able to take a certain level of safety for granted with respect to our privacy. Where a collection, use or disclosure of personal information is clearly and universally harmful, the Privacy Act should prohibit it. It should not fall to the user to identify and avoid or mitigate that harm.

Privacy laws in other jurisdictions do this. Canada, for example, requires any handling of personal information to be ‘for purposes that a reasonable person would consider appropriate in the circumstances’. In Europe under the GDPR, personal data must be processed ‘fairly’. Both requirements have the effect of prohibiting or restricting the most harmful uses of personal information.

However, under our current Privacy Act, we have no such protection. There’s nothing in the Privacy Act that would stop, for example, an organisation publishing personal information, including addresses and photos, to facilitate stalking and targeting of individuals (provided they collected the information for that purpose). Similarly, there’s nothing in the Privacy Act that would stop an organisation using personal information to identify and target vulnerable individuals with exploitative content (such as gambling advertising).[2] The Australian Privacy Principles (APPs) do surprisingly little to prohibit unfair or unreasonable use and disclosure of personal information, even where it does not meet community expectations or may cause harm to individuals.

A pub test for privacy

It is past time that changed. We need a pub test for privacy. Or more formally, an overarching requirement that any collection, use or disclosure of personal information must be fair and reasonable in all the circumstances.

For organisations, the burden of this new requirement would be limited. Fairness and reasonableness are well established legal standards, and the kind of analysis required — taking into account the broader circumstances surrounding a practice, such as community expectations and any potential for harm — is already routinely conducted in the course of a Privacy Impact Assessment (a standard process used in many organisations to identify and minimise the privacy impacts of projects). Fairness and reasonableness present a low bar, which the vast majority of businesses and business practices clear easily.

But for individuals, stronger baseline protections present real and substantial benefits. A pub test would rule out the most exploitative data practices and provide a basis for trust by shifting some responsibility for avoiding harm onto organisations. This lowers the level of vigilance required to protect against everyday privacy harms — so I don’t need to read a privacy policy to check whether my flashlight app will collect and share my location information, for example. It also helps to build trust in privacy protections themselves by bringing the law closer into line with community expectations — if an act or practice feels wrong, there’s a better chance that it will be.

The ultimate goal

The goal here — of both consent reforms and a pub test — is to make privacy easier for everyone. To create a world where individuals don’t need to read the privacy policy or understand how cookies work or navigate complex settings and disclosures just to avoid being tracked. Where we can simply trust that the organisations we’re dealing with aren’t doing anything crazy with our data, just as we can trust that the builders of a skyscraper aren’t doing anything crazy with the foundations. And to create a world where this clearer and globally consistent set of expectations also makes life easier for organisations.

These changes are not revolutionary, and they might not get us to that world immediately, but they are an important step along the path, and similar measures have been effective in driving better practices in other jurisdictions.

The review of the Privacy Act is not only an opportunity to bring us back in line with international best practice, but also an opportunity to make privacy easier and more manageable for us all.


Read all posts from the Privacy in focus series:
Privacy in focus: A new beginning
Privacy in focus: Who’s in the room?
Privacy in focus: What’s in a word?
Privacy in focus: The consent catch-22
Privacy in focus: A pub test for privacy



[1] In its submission on the issues paper, the OAIC recommends amending the definition of consent to require ‘a clear affirmative act that is freely given, specific, current, unambiguous and informed’.

[2] These examples are drawn from the OAIC’s submission to the Privacy Act Review Issues Paper – see pages 84-88.

Patch me if you can: key challenges and considerations

In this third and final post of our series on vulnerability management, elevenM’s Theo Schreuder explores some of the common challenges faced by those running vulnerability management programs.

In our experience working with clients, there are some recurring questions that present themselves once vulnerability management programs are up and running. We outline the main ones here, and propose a way forward.

Challenge 1: Choosing between a centralised or decentralised model

Depending on the size of your organisation, a good vulnerability management program may be harder or easier to implement. In a smaller organisation it usually falls to a single security function within the IT team to provide management of vulnerabilities. This makes it easy to coordinate and prioritise remediation work and perform evaluation for exemptions.

However, in larger organisations, having individual systems teams all trying to manage and report on their vulnerabilities makes it difficult to manage vulnerabilities in a holistic way. In these scenarios, a dedicated and centralised vulnerability management team is necessary to provide governance over the entire end-to-end cycle. This team should be responsible for running scans and providing expertise on assessment of vulnerabilities as well as providing holistic reporting to management and executives.

The benefit of a dedicated vulnerability management team is that there is a single point of contact for information about all the vulnerabilities in the organisation.

Challenge 2: Ensuring risk ownership

To avoid cries of “not my responsibility” or “I have other things to do” it is important to establish who owns the risk relating to different assets and domains in the organisation, and therefore who is responsible for driving the remediation of vulnerabilities. Without a clear definition of responsibilities and procedures it is easy to get bogged down in debates over responsibilities for carrying out remediation work, rather than proceeding with the actual doing of the remediation work and securing of the network.

Furthermore, in our experience there are often different responsibilities with regards to who patches what in an organisation. As mentioned in our previous post, often there is a distinction between who is responsible for patching of below base (system level) vulnerabilities and for above base (application level) vulnerabilities. If these distinctions, and the ownership of risk across these distinctions, are not clearly defined then the patching of some vulnerabilities can fall through the cracks.

Challenge 3: Driving risk-based remediation

The importance of having an organisation-wide critical asset register cannot be overstated. From the point of view of individual asset owners, their own application is the most critical – to them. It is important to take an approach that measures the risk of an asset being exploited or becoming unavailable in terms of the business as a whole, and not just in terms of the team that uses it.

In the same way, security risks, mitigating controls and network exposure must be taken into account. From a risk perspective, an air-gapped payroll system behind ten thousand firewalls would not be as critical as an internet-facing router that has no controls in place and a default password that allows a hacker access into your network. Hackers don’t care so much about the function of a device if it allows them access to everything else on your network.
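The comparison above can be sketched as a simple scoring exercise. The Python below is illustrative only – the fields, weightings and adjustments are invented for this post, not a standard formula – but it shows how severity, business criticality and exposure might combine so that the internet-facing router outranks the air-gapped payroll system:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cvss: float          # vulnerability severity, 0.0-10.0
    criticality: int     # business criticality from the asset register, 1-5
    internet_facing: bool
    mitigated: bool      # e.g. air-gapped or behind compensating controls

def risk_score(f: Finding) -> float:
    """Toy risk score: severity x criticality, adjusted for exposure."""
    score = f.cvss * f.criticality
    if f.internet_facing:
        score *= 2.0      # exposed assets jump the queue
    if f.mitigated:
        score *= 0.25     # compensating controls lower the urgency
    return score

findings = [
    Finding("payroll-db", cvss=9.8, criticality=5, internet_facing=False, mitigated=True),
    Finding("edge-router", cvss=7.5, criticality=3, internet_facing=True, mitigated=False),
]

# Remediate in descending order of business risk, not raw CVSS
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.asset}: {risk_score(f):.1f}")
```

In practice, the criticality input would come from that organisation-wide critical asset register – which is exactly why the register matters.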

To recap …
We hope you enjoyed the series on vulnerability management. For a refresher, you can find links to all the posts in the series at the bottom of the article. In the meantime, here are our 5 top steps for a good vulnerability management program:

  1. Get visibility quickly – scan everything and tailor reports to different audiences.
  2. Centralise your vulnerability management function – provides a holistic picture of risk to your entire network and supports prioritisation.
  3. Know your critical assets – understand their exposure and prioritise their remediation.
  4. Get your house in order – have well defined and understood asset inventories, processes and risk ownership.
  5. Automate as much as possible – leverage technology to reduce the costs of lowering risk and allow you to do more with fewer resources.
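Step 1’s advice to ‘tailor reports to different audiences’ can be sketched in a few lines. The data shape below is hypothetical – real findings would come from your scanner’s export – but the idea is one dataset, two views: per-team work lists for engineers, headline numbers for executives:

```python
from collections import defaultdict

# Synthetic scan findings; a real program would parse scanner output.
findings = [
    {"host": "web-01", "owner": "platform", "severity": "critical"},
    {"host": "web-02", "owner": "platform", "severity": "high"},
    {"host": "hr-app", "owner": "corporate", "severity": "critical"},
]

# View 1 – engineers: every finding assigned to their team
by_team = defaultdict(list)
for f in findings:
    by_team[f["owner"]].append(f["host"])
for team, hosts in sorted(by_team.items()):
    print(f"{team}: {hosts}")

# View 2 – executives: just the headline numbers
criticals = sum(1 for f in findings if f["severity"] == "critical")
print(f"{criticals} critical findings across {len(by_team)} teams")
```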

Read all posts in the series:
Patch me if you can: the importance of vulnerability management
Patch me if you can: the six steps of vulnerability management
Patch me if you can: key challenges and considerations

elevenM turns five

elevenM turned five this week.

I recall a stat from university that half of all small businesses fail within the first five years. I am not sure if that stat still holds true, but it is something that has stuck in my mind. Maybe that is the reason I felt the need to note this milestone having not done so for any of the previous years. The subconscious works in mysterious ways.

What I would like to do is to take this moment to thank the wonderful Melanie Marks (a better business partner I could not have dreamed of) and the energetic and talented team at elevenM for getting us here.

Lastly, I would like to take this opportunity once again to thank the clients who have supported our business over the past five years. Simply put, without you we have no business. We have never taken that for granted, nor will we ever.

Best regards

Pete

Is supplier risk management useless?


So here we are again. Another supply chain attack has led to the compromise of highly sensitive computer networks. Is this the point where we draw a line under supplier risk management, put our hands up and say ‘too hard’? Alex Stamos, Adjunct Professor at Stanford University’s Center for International Security and Cooperation and former chief security officer (CSO) at Facebook, seems to think so. In a tweet following the SolarWinds compromise he said:

“Vendor risk management is an invisible, incredibly expensive and mostly useless process as executed by most companies. When decent, it happens too late in procurement.”

For those of you who follow our blogs, you will know that this is a subject we also have strong views on. It is our view that supply chain risk is something companies cannot solve on their own. We were therefore delighted to see statements in the 2020 Australian Cyber Security Strategy that help is on its way:

“The Australian Government will establish a Cyber Security Best Practice Regulation Task Force to work with businesses and international partners to consider options for better protecting customers by ensuring cyber security is built into digital products, services and supply chains.”

What this Task Force will look like beyond the conceptual, we will need to wait and see. Given recent events, however, we at elevenM hope that whatever the action is, it gets delivered sooner rather than later.

2019 end of year wrap

It’s the end of another year (and another decade).

Close out your final tasks, prepare for the inevitable summer feasting, and join us as we recap five cyber security and privacy themes that captured our attention in 2019.

  1. The fractured world of cyber affairs – is cooperation fraying just as things heat up?
  2. The scourge that won’t go away – clearing bins, paving footpaths and paying ransoms: a checklist for city councils in 2019
  3. Play with your toys – on privacy regulators who are unafraid to fine
  4. An inconvenient comparison – the emerging parallels between climate change and digital issues
  5. And then a hero comes along – we pose a question: ‘who will be our Greta?’


The fractured world of cyber affairs

Let’s start on the world stage.

Did it feel like cyber security and privacy were overshadowed in 2019, without a global-scale, highly disruptive cyber-attack to catch our attention? After all, this is the decade that gave us Stuxnet, the Sony Pictures breach, Cambridge Analytica and WannaCry.

In 2019, more column inches seemingly went to non-digital matters – ongoing civil dissent in Hong Kong, trade wars, and the US going perilously close to military action against Iran.

But peer a little closer and it was far from quiet on the cyber front.

In June, Israel responded to Hamas cyber-attackers with a physical strike. Hackers also caused disruption at a US power grid. Neither incident was of the scale of a “Cyber Pearl Harbour” – the kind we’re told repeatedly to fear, even here in Australia. But both were firsts of a kind – physical retaliation to digital aggression and the first cyber disruption of the US power grid.

Then there was the cyber-attack on Australia’s Parliament, reportedly by China. Coming just months before Australia’s May Federal election, the hack raised the spectre of election interference akin to the 2016 (and 2020) US elections.

It all leaves us pondering the state of diplomacy in cyberspace in 2019. Co-operation and leadership on the global stage have arguably weakened, not just in cyber affairs but in matters of defence generally (see the prognosis on NATO).

Traditionally strong leadership from the US on cyber affairs has been under the spotlight. Key roles like that of White House cyber coordinator have been eliminated (by President Trump’s national security adviser John Bolton, who himself was eliminated as adviser by Trump halfway through 2019).

The result appears to be a strengthening of the hand of China, North Korea and Russia in discussions about how the internet will be governed. A few months ago the United Nations adopted a cybercrime resolution against the wishes of the US and civil liberty advocates.

The Australian Government’s contribution to this dialogue also came under fire from policy analysts this year. The critics decry that the future national cyber security strategy appears to have dropped its commitment to a free and open internet.



The scourge that won’t go away

Stepping down from the rarefied atmosphere of global affairs and nation states, let’s turn our attention to cities and towns. “All politics is local” goes the saying in the US, and in 2019 a sizeable number of cyber-attacks were too.

Baltimore, Pensacola, Atlanta and, most recently, New Orleans are just a few of the dozens of US cities and counties brought down by ransomware attacks in 2019. The attacks caused widespread disruption – halting property transactions, crippling the court system, preventing the payment of bills and costing ratepayers millions in recovery costs.

The vulnerability of these US cities to ransomware is attributed to their reliance on ageing, legacy infrastructure that isn’t patched.

The spate of ransomware incidents also elevated the discussion about the merits of paying ransoms. Official advice (and some polling) comes out strongly against forking out. But hell hath no fury like a rate-payer scorned – and the pressures of explaining disrupted services to angry residents proved too onerous for many officials, with more than one city opting to pay the ransom.

Sadly, more ransomware infections and even higher ransoms are likely on the cards again in 2020. Solutions exist – both technical and human – but it appears they are not always so easily implemented.



Play with your toys

An inflatable pink flamingo for the pool, a USB-powered toothbrush or wifi-enabled socks – what odd trinkets and strange gadgets lie under your tree, waiting to be unwrapped on Christmas morning?

Data protection authorities got some big toys last year, like the General Data Protection Regulation (GDPR) and Notifiable Data Breaches scheme. By mid-2019 they were giving those toys a solid work out, especially the shiny new fining capabilities. The UK’s Information Commissioner’s Office (ICO) used GDPR to whack British Airways over the head with a sizeable £183 million fine for its 2018 breach. It then shot a £99m Nerf dart at Marriott for its breach in the same year.

Across the Atlantic, the Americans weren’t about to miss out on the fun. The US Federal Trade Commission warmed up by slapping a US$575 million settlement payment on Equifax for its 2017 breach. Then they fined Facebook US$5 billion (self-described as a “record-breaking” penalty) for a series of privacy violations, including the Cambridge Analytica scandal.

Closer to home, the Australian Government has just given privacy advocates an early Christmas gift by affirming its commitment to increase penalties under the Privacy Act.



An inconvenient comparison?

The year 2019 saw the convergence of major issues. When thousands of school children marched in support of action on climate change in September, our principal Melanie Marks noticed the links to our collective digital challenges:

I pondered why the climate rally had delivered so many to the streets now, when we have known about climate change for years?

Privacy harm is more nebulous. The potential policy issues are hard to solve for and engaging the public even more difficult.

– Melanie Marks, elevenM

Consensus, coalitions, cooperation, a need to address externalities … the ingredients for progress on climate change appear to overlap with our challenges in privacy and cyber security.

There was progress this year in establishing a standard for climate-change related financial risk disclosures. It’s a project driven by the Financial Stability Board, a G20 body that is also driving a coordinated approach to managing cyber security in the global financial system.

The premise is to make more transparent the financial risks posed by climate change. A local investor group puts it this way: “When you have the data around assets, countries and companies, you change the way you allocate capital, it changes the way you assess risks, and it ultimately changes the economy.”

The same moves towards more data and more transparency were clearly apparent this year in efforts to protect the Australian economy against digital risks. The Australian Prudential Regulation Authority’s new information security prudential standard CPS 234, which took effect in July, is a clear example of this.

“We’ll be increasingly challenging entities in this area by utilising data driven insights to prioritise and tailor our supervisory activities. In the longer term, we’ll use this information to inform baseline metrics against which APRA regulated institutions will be benchmarked and held to account for maintaining their cyber defences.”

– Geoff Summerhayes, APRA executive board member

We see the same trend playing out with businesses we work with. At executive level there’s a strong desire for better quantification of digital risks and of how they’re being managed. Non-executive directors want to see privacy and cyber security measured and articulated like other risks in their enterprise risk frameworks.

Measuring the value and return of security investments also poses a challenge. There’s been a boom in security tools and products, but we’re now hearing more from Chief Information Security Officers who want to measure and extract value from that tooling – a problem our Senior Project Manager Mike Wood delved into earlier this year.



And then a hero comes along

Greta Thunberg is 2019’s person of the year. “Meaningful change rarely happens without the galvanizing force of influential individuals”, said TIME magazine’s editor-in-chief in awarding the honour.

Maybe what we’re lacking is a figurehead for privacy, someone to catalyse global opinion and press for changes in how companies handle our personal information.

Audaciously, Mark Zuckerberg looked like he was trying to claim this mantle in April, when he stood under bright lights and a large banner proclaiming “The future is private”.

Social media might have atrophied attention spans, but not so much that we’d forgotten Cambridge Analytica, or missed Facebook’s other repeated privacy scandals this year. Most people clicked ‘thumbs down’ at Zuck’s proclamation and moved on with their day.

But our champion might have emerged, just less recognisable. Lacking the chutzpah of Miss Thunberg, but still much like an earnest school kid, Rod Sims raised his hand again and again in 2019 to be privacy’s biggest stalwart.

The ACCC chairman faced off against Google and Facebook repeatedly this year, arguing that they haven’t been playing nicely. In its Digital Platforms Inquiry, the ACCC and Sims laid bare how privacy is being fundamentally undermined in the digital age.

“It’s completely not working anymore. You are not informed about what’s going on and even if you were, you’ve got no choice because your choice is getting off Google or Facebook and not many people want to do that.

“We need to modernise our privacy laws, we need proper consent…we need new definitions of what is personal data, we need an ability to erase data and we need to require the digital platforms to just tell us very clearly what data is being collected and what’s being done with it.”

– Rod Sims, ACCC Chairman

This merging of privacy and consumer issues may well be the development of 2019.

Using Australia’s highly-regarded consumer law framework to prosecute the case for privacy would add the considerable muscle of the ACCC to the efforts of the Office of the Australian Information Commissioner in standing up for the privacy rights of Australian citizens.

Happily, in its response to the inquiry, the Government last week committed to many of the ACCC’s recommendations.

These steps forward on the enforcement of privacy are welcome. It’s still useful and crucial to remind ourselves why privacy matters to begin with. On Human Rights Day this year, elevenM Senior Consultant Jordan Wilson-Otto argued that we must go beyond advocating for privacy because of its utility as competitive differentiation or as a driver of innovation. Privacy is fundamentally about guaranteeing dignity and respect and preserving that which is important to us as humans.


Signing off

And that’s a fitting note on which to end our thoughts for the year.

Throughout 2019, we’ve been privileged to work with terrific people from a diverse set of clients. These are people who are highly talented, well respected in their industries, and passionate about protecting their customers and staff from digital risks.

We’re grateful for the opportunities we’ve had to be part of your journeys, and look forward to continuing our conversation and collaborations in 2020.

Have a safe and joyous festive season.

The team at elevenM.


Solving ransomware

We’re back in Baltimore. Unfortunately not to relive Arjun’s favourite pithy one-liners from The Wire, but to talk about something from the non-fiction genre: Ransomware.

In just a few years, ransomware has gone from nothing to a multi-billion dollar industry. And it continues to grow. It’s little wonder that law enforcement are quietly holding crisis summits to ask for help.

In May of this year, the City of Baltimore was hit with a ransomware attack. The ransomware used was called RobbinHood and it encrypted an estimated 10,000 networked computers. Email systems and payment platforms were taken offline. Baltimore’s property market also took a hit as people were unable to complete real estate sales.

One click away

Like most public sector technology environments, the City of Baltimore’s networks appear to have run a mix of old and new systems. Precisely because they are old, aging systems are typically unable to be “patched” or updated for known security threats, making them vulnerable.

But getting funding to replace or update computing systems is difficult, especially when you are competing with critical services like police, fire and hospitals.

Given the hard reality that many large networks will have a high volume of outdated, and therefore vulnerable, systems that are only one mouse click away from becoming infected, should we not focus more on preventing malware from propagating?

Trust

Most global corporate networks operate using a trust principle. If you are part of the same domain or group of companies, you are trusted to connect to each other’s network. This has obvious benefits, but it also brings a number of risks when we consider threats like ransomware.

Strategies

There are many strategies to mitigate the risk of a ransomware outbreak. Backing up your files, patching your computers and avoiding suspicious links or attachments are commonly advised. At elevenM, we recommend these strategies, however we also work closely with our clients on an often overlooked piece of the puzzle: Active Directory. The theory being: if your network cannot be used to spread malware, your exposure to ransomware is significantly reduced.

Monitoring Active Directory for threats

To understand this in more detail, let’s go back to Baltimore. According to reports, the Baltimore attack came through a breach of the City’s Domain Controller, a key piece of the Active Directory infrastructure. This was then used to deliver ransomware to 10,000 machines. What if Baltimore’s Active Directory had been integrated with security tools that allowed it to monitor, detect, and contain ransomware instead of being used to propagate it?

Working with our clients and Active Directory-specific tools, we have been able to separate and monitor Active Directory-based threat indicators, including:

  • Lateral movement restriction
  • Obsolete systems
  • Brute force detection
  • Anonymous user behaviour
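As a rough illustration of what one of these indicators involves, the sketch below flags possible brute-force activity by counting failed-logon events per account (event ID 4625 in Windows security logs). The event records and threshold here are synthetic – a real integration would consume domain controller logs or a SIEM feed:

```python
from collections import Counter

THRESHOLD = 5  # failed attempts per account before we flag it (arbitrary)

# Synthetic event stream; 4625 = failed logon, 4624 = successful logon
events = [
    {"event_id": 4625, "account": "svc-backup"},
    {"event_id": 4625, "account": "svc-backup"},
    {"event_id": 4625, "account": "svc-backup"},
    {"event_id": 4625, "account": "svc-backup"},
    {"event_id": 4625, "account": "svc-backup"},
    {"event_id": 4625, "account": "svc-backup"},
    {"event_id": 4624, "account": "j.smith"},  # successful logon – ignored
]

# Count failures per account and flag anything over the threshold
failures = Counter(e["account"] for e in events if e["event_id"] == 4625)
suspects = [acct for acct, count in failures.items() if count >= THRESHOLD]
print(suspects)
```

A production detector would also window the counts by time and correlate a burst of failures with any successful logon that follows it.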

All the pieces of the puzzle

In mitigating cyber threats, defence teams today have access to many tools and strategies. Often, there emerges a promised silver bullet to a particular threat. But the truth is that most threats will require a layered defence, involving multiple controls and core knowledge of common IT infrastructure (like Active Directory). Or to put it again in the language of the streets of Baltimore: “All the pieces matter”.

Want to hear more? Drop us a line at hello@elevenM.com