Is supplier risk management useless?

 

So here we are again. Another supply chain attack has led to the compromise of highly sensitive computer networks. Is this the point where we draw a line under supplier risk management, put our hands up and say ‘too hard’? Alex Stamos, adjunct professor at Stanford University’s Center for International Security and Cooperation and former chief security officer (CSO) at Facebook, seems to think so. In a tweet following the SolarWinds compromise he said:

“Vendor risk management is an invisible, incredibly expensive and mostly useless process as executed by most companies. When decent, it happens too late in procurement.”

For those of you who follow our blogs, you will know that this is a subject we also have strong views on. It is our view that supply chain risk is something companies cannot solve on their own. We were therefore delighted to see statements in the 2020 Australian Cyber Security Strategy that help is on its way:

“The Australian Government will establish a Cyber Security Best Practice Regulation Task Force to work with businesses and international partners to consider options for better protecting customers by ensuring cyber security is built into digital products, services and supply chains.”

What this Task Force will look like in practice, we will need to wait and see. Given recent events, however, we at elevenM hope that whatever action follows is delivered sooner rather than later.

2019 end of year wrap

It’s the end of another year (and another decade).

Close out your final tasks, prepare for the inevitable summer feasting, and join us as we recap five cyber security and privacy themes that captured our attention in 2019.

  1. The fractured world of cyber affairs – is cooperation fraying just as things heat up?
  2. The scourge that won’t go away – clearing bins, paving footpaths and paying ransoms: a checklist for city councils in 2019
  3. Play with your toys – on privacy regulators who are unafraid to fine
  4. An inconvenient comparison – the emerging parallels between climate change and digital issues
  5. And then a hero comes along – we pose a question: ‘who will be our Greta?’


The fractured world of cyber affairs

Let’s start on the world stage.

Did it feel like cyber security and privacy were overshadowed in 2019, without a global-scale, highly disruptive cyber-attack to catch our attention? After all, this is the decade that gave us Stuxnet, the Sony Pictures breach, Cambridge Analytica and WannaCry.

In 2019, more column inches seemingly went to non-digital matters – ongoing civil dissent in Hong Kong, trade wars, and the US going perilously close to military action against Iran.

But peer a little closer and it was far from quiet on the cyber front.

In June, Israel responded to Hamas cyber-attackers with a physical strike. Hackers also caused disruption at a US power grid. Neither incident was of the scale of a “Cyber Pearl Harbour” – the kind we’re told repeatedly to fear, even here in Australia. But both were firsts of a kind – a physical retaliation to digital aggression, and the first cyber disruption of a US power grid.

Then there was the cyber-attack on Australia’s Parliament, reportedly by China. Coming just months before Australia’s May federal election, the hack raised the spectre of election interference akin to that seen in the 2016 US election (and feared for 2020).

It all leaves us pondering the state of diplomacy in cyberspace in 2019. Co-operation and leadership on the global stage have arguably weakened, not just in cyber affairs but in matters of defence generally (see the prognosis on NATO).

Traditionally strong leadership from the US on cyber affairs has been under the spotlight. Key roles like that of White House cyber coordinator have been eliminated (by President Trump’s national security adviser John Bolton, who himself was eliminated as adviser by Trump halfway through 2019).

The result appears to be a strengthening of the hand of China, North Korea and Russia in discussions about how the internet will be governed. A few months ago the United Nations adopted a cybercrime resolution against the wishes of the US and civil liberty advocates.

The Australian Government’s contribution to this dialogue also came under fire from policy analysts this year, with critics arguing that the forthcoming national cyber security strategy appears to have dropped its commitment to a free and open internet.

 


The scourge that won’t go away

Stepping down from the rarefied atmosphere of global affairs and nation states, let’s turn our attention to cities and towns. “All politics is local” goes the saying in the US, and in 2019 a sizeable number of cyber-attacks were too.

Baltimore, Pensacola, Atlanta and, most recently, New Orleans are just a few of the dozens of US cities and counties brought down by ransomware attacks in 2019. The attacks caused widespread disruption – halting property transactions, crippling court systems, preventing the payment of bills and costing millions in ratepayer funds in recovery costs.

The vulnerability of these US cities to ransomware is attributed to their reliance on ageing, legacy infrastructure that isn’t patched.

The spate of ransomware incidents also elevated the discussion about the merits of paying ransoms. Official advice (and some polling) comes out strongly against forking out. But hell hath no fury like a rate-payer scorned – and the pressures of explaining disrupted services to angry residents proved too onerous for many officials, with more than one city opting to pay the ransom.

Sadly, more ransomware infections and even higher ransoms are likely on the cards again in 2020. Solutions exist – both technical and human – but it appears they are not always so easily implemented.

 


Play with your toys

An inflatable pink flamingo for the pool, a USB-powered toothbrush or wifi-enabled socks – what odd trinkets and strange gadgets lie under your tree, waiting to be unwrapped on Christmas morning?

Data protection authorities got some big toys last year, like the General Data Protection Regulation (GDPR) and the Notifiable Data Breaches scheme. By mid-2019 they were giving those toys a solid workout, especially the shiny new fining capabilities. The UK’s Information Commissioner’s Office (ICO) used the GDPR to whack British Airways over the head with a proposed £183 million fine for its 2018 breach. It then shot a £99m Nerf dart at Marriott for its breach in the same year.

Across the Atlantic, the Americans weren’t about to miss out on the fun. The US Federal Trade Commission warmed up by slapping a US$575 million settlement payment on Equifax for its 2017 breach. Then they fined Facebook US$5 billion (self-described as a “record-breaking” penalty) for a series of privacy violations, including the Cambridge Analytica scandal.

Closer to home, the Australian Government has just given privacy advocates an early Christmas gift by affirming its commitment to increase penalties under the Privacy Act.

 


An inconvenient comparison?

The year 2019 saw the convergence of major issues. When thousands of school children marched in support of action on climate change in September, our principal Melanie Marks noticed the links to our collective digital challenges:

I pondered why the climate rally had delivered so many to the streets now, when we have known about climate change for years?

Privacy harm is more nebulous. The potential policy issues are hard to solve for and engaging the public even more difficult.

– Melanie Marks, elevenM

Consensus, coalitions, cooperation, a need to address externalities … the ingredients for progress on climate change appear to overlap with our challenges in privacy and cyber security.

There was progress this year in establishing a standard for climate-change related financial risk disclosures. It’s a project driven by the Financial Stability Board, a G20 body that is also driving a coordinated approach to managing cyber security in the global financial system.

The premise is to make more transparent the financial risks posed by climate change. A local investor group puts it this way: “When you have the data around assets, countries and companies, you change the way you allocate capital, it changes the way you assess risks, and it ultimately changes the economy.”

The same moves towards more data and more transparency were clearly apparent this year in efforts to protect the Australian economy against digital risks. The Australian Prudential Regulation Authority’s new information security prudential standard CPS 234, which took effect in July, is a clear example of this.

“We’ll be increasingly challenging entities in this area by utilising data driven insights to prioritise and tailor our supervisory activities. In the longer term, we’ll use this information to inform baseline metrics against which APRA regulated institutions will be benchmarked and held to account for maintaining their cyber defences.”

– Geoff Summerhayes, APRA executive board member

We see the same trend playing out with businesses we work with. At executive level there’s a strong desire for better quantification of digital risks and of how they’re being managed. Non-executive directors want to see privacy and cyber security measured and articulated like other risks in their enterprise risk frameworks.

Measuring the value and return of security investments also poses a challenge. There’s been a boom in security tools and products, but we’re now hearing more from Chief Information Security Officers who want to measure and extract value from that tooling –  a problem our Senior Project Manager Mike Wood delved into earlier this year.

 


And then a hero comes along

Greta Thunberg is TIME’s 2019 Person of the Year. “Meaningful change rarely happens without the galvanizing force of influential individuals”, said the magazine’s editor-in-chief in awarding the honour.

Maybe what we’re lacking is a figurehead for privacy, someone to catalyse global opinion and press for changes in how companies handle our personal information.

Audaciously, Mark Zuckerberg looked like he was trying to claim this mantle in April, when he stood under bright lights and a large banner proclaiming “The future is private”.

Social media might have atrophied attention spans, but not so much that we’d forgotten Cambridge Analytica, or missed Facebook’s other repeated privacy scandals this year. Most people clicked ‘thumbs down’ at Zuck’s proclamation and moved on with their day.

But our champion might have emerged, just less recognisable. Lacking the chutzpah of Miss Thunberg, but still much like an earnest school kid, Rod Sims raised his hand again and again in 2019 to be privacy’s biggest stalwart.

The ACCC chairman faced off against Google and Facebook repeatedly this year, arguing that they haven’t been playing nicely. In its Digital Platforms Inquiry, the ACCC and Sims laid bare how privacy is being fundamentally undermined in the digital age.

“It’s completely not working anymore. You are not informed about what’s going on and even if you were, you’ve got no choice because your choice is getting off Google or Facebook and not many people want to do that.

“We need to modernise our privacy laws, we need proper consent…we need new definitions of what is personal data, we need an ability to erase data and we need to require the digital platforms to just tell us very clearly what data is being collected and what’s being done with it.”

– Rod Sims, ACCC Chairman

This merging of privacy and consumer issues may well be the development of 2019.

Using Australia’s highly-regarded consumer law framework to prosecute the case for privacy would add the considerable muscle of the ACCC to the efforts of the Office of the Australian Information Commissioner in standing up for the privacy rights of Australian citizens.

Happily, in its response to the inquiry, the Government last week committed to many of the ACCC’s recommendations.

These steps forward on the enforcement of privacy are welcome, but it remains crucial to remind ourselves why privacy matters in the first place. On Human Rights Day this year, elevenM Senior Consultant Jordan Wilson-Otto argued that we must go beyond advocating for privacy because of its utility as a competitive differentiator or a driver of innovation. Privacy is fundamentally about guaranteeing dignity and respect, and preserving that which is important to us as humans.

 

Signing off

And that’s a fitting note on which to end our thoughts for the year.

Throughout 2019, we’ve been privileged to work with terrific people from a diverse set of clients. These are people who are highly talented, well respected in their industries, and passionate about protecting their customers and staff from digital risks.

We’re grateful for the opportunities we’ve had to be part of your journeys, and look forward to continuing our conversation and collaborations in 2020.

Have a safe and joyous festive season.

The team at elevenM.

 

Solving ransomware

We’re back in Baltimore. Unfortunately not to relive Arjun’s favourite pithy one-liners from The Wire, but to talk about something from the non-fiction genre: Ransomware.

In just a few years, ransomware has gone from nothing to a multi-billion dollar industry. And it continues to grow. It’s little wonder that law enforcement agencies are quietly holding crisis summits to ask for help.

In May of this year, the City of Baltimore was hit with a ransomware attack. The ransomware used was called RobbinHood and it encrypted an estimated 10,000 networked computers. Email systems and payment platforms were taken offline. Baltimore’s property market also took a hit as people were unable to complete real estate sales.

One click away

Like most public sector technology environments, the City of Baltimore’s network appears to have contained a mix of old and new systems. Precisely because they are old, ageing systems typically cannot be “patched” or updated against known security threats, making them vulnerable.

But getting funding to replace or update computing systems is difficult, especially when you are competing with critical services like police, fire and hospitals.

Given the hard reality that many large networks will have a high volume of outdated, and therefore vulnerable, systems that are only one mouse click away from becoming infected, should we not focus more on preventing malware from propagating?

Trust

Most global corporate networks operate on a trust principle: if you are part of the same domain or group of companies, you are trusted to connect to each other’s networks. This has obvious benefits, but it also brings a number of risks when we consider threats like ransomware.

Strategies

There are many strategies to mitigate the risk of a ransomware outbreak. Backing up your files, patching your computers and avoiding suspicious links or attachments are commonly advised. At elevenM, we recommend these strategies, but we also work closely with our clients on an often overlooked piece of the puzzle: Active Directory. The theory is simple: if your network cannot be used to spread malware, your exposure to ransomware is significantly reduced.

Monitoring Active Directory for threats

To understand this in more detail, let’s go back to Baltimore. According to reports, the Baltimore attack came through a breach of the City’s Domain Controller, a key piece of the Active Directory infrastructure. This was then used to deliver ransomware to 10,000 machines. What if Baltimore’s Active Directory had been integrated with security tools that allowed it to monitor, detect and contain ransomware instead of being used to propagate it?

Working with our clients and Active Directory-specific tools, we have been able to isolate and monitor Active Directory-based threat indicators (a simple illustration follows the list below), including:

  • Lateral movement restriction
  • Obsolete systems
  • Brute force detection
  • Anonymous user behaviour
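
To make the idea concrete, here is a minimal, hypothetical sketch of how two of these indicators might be surfaced from data most organisations already have – exported Windows security events and a dump of Active Directory computer accounts. It is illustrative only (it is not our tooling), and the file names, column names and thresholds are assumptions.

```python
"""
Illustrative sketch only: scan exported AD / Windows security event data for
two of the indicators above. File names, columns and thresholds are assumptions.
"""
import csv
from collections import Counter
from datetime import datetime, timedelta

FAILED_LOGON_EVENT = "4625"    # Windows event ID for a failed logon
BRUTE_FORCE_THRESHOLD = 20     # failed logons per account in the export window
STALE_DAYS = 90                # computer accounts silent this long look obsolete

def brute_force_candidates(events_csv: str) -> list[str]:
    """Flag accounts with an unusually high number of failed logons."""
    failures = Counter()
    with open(events_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["EventID"] == FAILED_LOGON_EVENT:
                failures[row["TargetAccount"]] += 1
    return [acct for acct, n in failures.items() if n >= BRUTE_FORCE_THRESHOLD]

def obsolete_systems(computers_csv: str) -> list[str]:
    """Flag computer accounts that have not logged on recently."""
    cutoff = datetime.now() - timedelta(days=STALE_DAYS)
    stale = []
    with open(computers_csv, newline="") as f:
        for row in csv.DictReader(f):
            # Assumes the export stores last logon as an ISO 8601 timestamp
            if datetime.fromisoformat(row["LastLogonTimestamp"]) < cutoff:
                stale.append(row["ComputerName"])
    return stale

if __name__ == "__main__":
    print("Possible brute force:", brute_force_candidates("security_events.csv"))
    print("Possibly obsolete systems:", obsolete_systems("ad_computers.csv"))
```

In practice this kind of logic lives inside dedicated Active Directory security monitoring products; the point is simply that these indicators are detectable from data you already hold, and are worth watching.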

All the pieces of the puzzle

In mitigating cyber threats, defence teams today have access to many tools and strategies. Often, there emerges a promised silver bullet to a particular threat. But the truth is that most threats will require a layered defence, involving multiple controls and core knowledge of common IT infrastructure (like Active Directory). Or to put it again in the language of the streets of Baltimore: “All the pieces matter“.

Want to hear more? Drop us a line at hello@elevenM.com

Mr Dutton, we need help with supplier risk

When we speak with heads of cyber, risk and privacy, eventually there comes a point when brows become more furrowed and the conversation turns to suppliers and the risk they pose.

There are a couple of likely triggers. First, APRA’s new prudential standard CPS 234 requires regulated entities to evaluate a supplier’s information security controls. Second, there’s heightened awareness in the business community that many data breaches suffered by organisations are ultimately the result of a breach at a supplier.

The problem space

Organisations today use hundreds or even thousands of suppliers for a multitude of services. The data shared and access given to deliver those services is increasingly so extensive that it has blurred the boundaries between organisation and supplier. In many cases, the supplier’s risk is the organisation’s risk.

Gaining assurance over the risk posed by a large number of suppliers, without using up every dollar of budget allocated to the cyber team, is an increasingly difficult challenge.

Assurance

To appreciate the scope of the challenge, we first need to understand the concept of “assurance”, a term not always well understood outside the worlds of risk and assurance. So let’s take a moment to clarify, using DLP (Data Loss Prevention) as an example.

To gain assurance over a control, you need to evaluate both the design effectiveness and the operating effectiveness of that control. APRA’s new information security prudential standard CPS 234 requires regulated entities to assess both for the information security controls they rely upon to manage their risk, even if a control sits with a supplier. So what would that entail in the DLP example? (A simple sketch follows the two points below.)

  • Design effectiveness would be confirming that the DLP tool covers all information sources and potential exit points for your data. It would involve making sure data is classified (marked) so it can be monitored by the tool. Evidence of the control working would be kept.
  • Operating effectiveness would be the proof (using the evidence above) that the control has been running for the period of time that it was supposed to.
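
Operating effectiveness is the part that depends on evidence being retained across the whole assessment period. The toy sketch below, which assumes one exported DLP alert report is retained per day under an agreed naming convention, shows the flavour of that check; it is an illustration of the concept, not an audit procedure.

```python
"""
Illustrative sketch only: a toy check of "operating effectiveness" for a DLP
control. It verifies that evidence (one exported DLP alert report per day)
exists for every day of the assessment period. Naming and layout are assumptions.
"""
from datetime import date, timedelta
from pathlib import Path

def missing_evidence_days(evidence_dir: str, start: date, end: date) -> list[date]:
    """Return the days in [start, end] with no DLP evidence file."""
    missing = []
    day = start
    while day <= end:
        # Assumed naming convention: dlp_report_YYYY-MM-DD.csv
        if not (Path(evidence_dir) / f"dlp_report_{day.isoformat()}.csv").exists():
            missing.append(day)
        day += timedelta(days=1)
    return missing

if __name__ == "__main__":
    gaps = missing_evidence_days("evidence/dlp", date(2019, 1, 1), date(2019, 6, 30))
    if gaps:
        print(f"Evidence missing for {len(gaps)} day(s); operating effectiveness "
              "cannot be demonstrated for the full period.")
    else:
        print("Evidence present for every day of the assessment period.")
```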

The unfortunate reality of assurance

In previous roles, members of our team have been part of designing and running market-leading supplier risk services. But these services never actually gave any assurance, unlike audit reports (e.g. SOC 2, ASAE). Supplier risk reports typically include a familiar caveat: “this report is not an audit and does not constitute assurance”.

This is because the supplier risk service typically involves the consulting firm sending the supplier a spreadsheet, which the supplier fills in, after which the consulting firm asks for evidence to support the responses.

This process provides little insight as to the design or operating effectiveness of a control. If the worst case happens and a supplier is breached, the organisation will point to the consulting firm, and the consulting firm will point to that statement in the report that said the service they were providing did not constitute assurance.

We need your help, Mr Dutton

The reality is that every organisation getting actual assurance over every control at each of its suppliers is just not a feasible option.

We believe Australia needs a national scheme to manage supplier risk. A scheme in which baseline security controls are properly audited for their design and operating effectiveness, where assurance is gained and results are shared as needed. This would allow organisations to focus their cyber budget and energies on gaining assurance over the specific controls at suppliers that are unique to their service arrangement.

Last week, Home Affairs Minister Peter Dutton issued a discussion paper seeking input into the nation’s 2020 cyber security strategy. This is a great opportunity for industry to put forward the importance of a national and shared approach to managing supplier risk in this country. We will be putting forward this view, and some of the ideas in this post, in our response.

We encourage those of you struggling with supplier risk to do the same. If you would like to contribute to our response, please drop us a line here.

Let’s take this seriously

Why would it be offensive when someone tells you they care about the very thing you want them to care about?  When your behaviour harms another because you overlooked something important, isn’t it good to convey that you do in fact care about that thing?

This might seem intuitive in the context of personal relationships, but often falls flat when organisations talk about privacy and cyber security. This week – in Privacy Awareness Week – we remind ourselves that demonstrating a commitment to privacy goes beyond soundbites and snappy one-liners.

“[Insert company name] takes privacy and security seriously” is increasingly one of the more jarring (and ill-advised) things a company can say today, especially in the wake of a breach.

It doesn’t sit well with journalists. You can almost hear their collective sigh every time a media statement containing that phrase is launched from corporate HQ.

Yet companies do put it in there, and usually at the very top.

Earlier this year, TechCrunch journalist Zack Whittaker scoured every data breach notification in California and found a third of companies had some variation of this “common trope”.

Whittaker wasn’t impressed: “The truth is, most companies don’t care about the privacy or security of your data. They care about having to explain to their customers that their data was stolen.”

For years, companies adopted a cloak-and-dagger attitude to any public commentary about privacy and security. “We don’t discuss matters of security” was a handy way for corporate affairs teams to bat away pesky tech and infosec journos, much like they might say “the matter is before the courts” in other awkward contexts.

This approach began to fray as companies realised cyber security and privacy issues weren’t purely technical stories. Breached data impacted real people today. Vulnerable systems could affect people tomorrow. And the community was becoming more vocal and aware.

We began to see companies eager to show they cared. And so … “We take privacy and security very seriously.”

But why should that rankle so much?

Simply because we intuitively detect something’s not right when a company or a person in our life glibly tells us they hold a position that contrasts with the evidence. In fact, it’s awkward.

Ask Mark Zuckerberg. Earlier this month, standing under a banner that read “the future is private”, the Facebook CEO proclaimed privacy was at the heart of Facebook’s new strategy. The awkwardness was so intense that Zuckerberg even sought to dissolve it with humour, rather unsuccessfully.

The gap between messages of care and diligence about data protection and what consumers actually experience isn’t unique to Facebook.

A number of breaches are the result of insufficient regard by a company for how customer data is used – such as unauthorised sharing with third parties – or the result of an avoidable mistake – like failing to fix a security flaw in a server where the patch has been available for months. And when companies insist they care while simultaneously trying to evade their responsibilities, tempering a sense of cynicism becomes even harder.

The state of the cyber landscape contributes too. Threats are intensifying, more breaches are happening and there are mandatory reporting requirements. Pick up a newspaper and odds are there’s a breach story in there. It’s not unreasonable for consumers to think there’s an epidemic of businesses losing sensitive data, yet somehow they’re all identically proclaiming to take data protection very seriously. It doesn’t add up.

At the same time, it should be possible for an organisation to affirm a commitment to data protection, even in the wake of a breach. Because it’s possible for a company to care deeply about privacy and security, to have invested greatly in these areas, and still be breached. Attackers are more skilled and determined, and it’s challenging to protect data that is everywhere thanks to the use of cloud technologies and third parties.

So we can cut organisations a little slack. But the way forward is not reverting to a catchy set of words alone.

As we learned from the 12-month review of the Notifiable Data Breaches scheme published this week by the Office of the Australian Information Commissioner, consumers and regulators want (and deserve) to see actions and responses that reflect empathy, accountability and transparency. They expect organisations to show a genuine commitment to reducing harm, such as in the assistance they provide victims after a breach. A willingness to continuously update the public about the key details of a breach, along with simple advice on what to do about it, also shows a genuine focus on the issue and a willingness to be transparent. And when company leaders are visible and take responsibility, it tells customers they will be accountable for putting things right.

Do these things, and there’s a better chance customers will take your commitment to privacy and security seriously.

Anti-Automation

You may think from the title that we’re about to say we oppose automation or think IT spend should be directed somewhere else. We are not. We love automation and consider it a strategic imperative for most organisations. But there is a problem: the benefits of automation apply to criminals just as much as they do to legitimate organisations.

Why criminals love automation

Success in cybercrime generally rests on two things: having a more advanced capability than those who are defending, and having the ability to scale your operation. Automation helps with both. Cybercriminals use automated bots (we term these ‘bad bots’) to attack their victims, meaning a small number of human criminals can deliver a large return. For the criminals, fewer people means fewer people to share the rewards with, and a lower risk of someone revealing the operation to the authorities or its secrets to rival criminals. Couple this with machine learning and criminals can rapidly adapt how their bots attack victims based on the experience of attacking other victims. As victims improve their security, the bots learn from other cases how to resume their attacks.

What attacks are typically automated?

Attacks take many forms, but two stand out: financial fraud and form filling. For financial fraud, bad bots exploit organisations’ payment gateways to wash through transactions using stolen credit card details. For major retailers, the transactions will typically be small (often $10.00), to test which card details are valid and working. The criminals then use the successful details to commit larger frauds until the card details no longer work. For form filling, bad bots exploit websites that have forms for users to provide information. Depending on the site and the attack vector of the bot, form-filling attacks can be used for a number of outcomes, such as filling a CRM system with dummy ‘new customer’ data, content scraping, and advanced DDoS attacks that, thanks to automation, can be configured to reverse engineer WAF rules and work out how to get through undetected.
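
The card-testing pattern in particular is easy to describe in code. The hypothetical sketch below flags payment sources that make many small transactions across many different cards; the field names, thresholds and transaction format are all assumptions, and real fraud engines weigh far more signals than this.

```python
"""
Illustrative sketch only: a toy detector for the card-testing pattern
described above (many small-value transactions from one source, each trying
a different card). Thresholds and field names are assumptions.
"""
from collections import defaultdict

SMALL_AMOUNT = 10.00      # card tests are typically tiny transactions
MIN_ATTEMPTS = 15         # how many small attempts from one source look suspicious
MIN_DISTINCT_CARDS = 10   # card testing cycles through many different card numbers

def card_testing_sources(transactions: list[dict]) -> list[str]:
    """Return source IPs whose small transactions span unusually many cards."""
    small_by_source = defaultdict(list)
    for tx in transactions:
        if tx["amount"] <= SMALL_AMOUNT:
            small_by_source[tx["source_ip"]].append(tx["card_fingerprint"])
    return [
        ip for ip, cards in small_by_source.items()
        if len(cards) >= MIN_ATTEMPTS and len(set(cards)) >= MIN_DISTINCT_CARDS
    ]

if __name__ == "__main__":
    sample = [
        {"source_ip": "203.0.113.9", "amount": 10.0, "card_fingerprint": f"card{i}"}
        for i in range(20)
    ]
    print(card_testing_sources(sample))  # ['203.0.113.9']
```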

Real business impact

The reason we at elevenM feel strongly about this is that we are seeing real business impact from these attacks. It shows up in simple metrics like the OPEX cost of web infrastructure: we have seen businesses dealing with this kind of automated traffic have their infrastructure costs increase by 35 per cent. There are clear productivity impacts from managing customer complaints caused by password lockouts, which can be crippling for high-volume, low-workforce businesses. And then there is fraud, something that impacts not only the business but the market and society as a whole.

How can we defend against them?

Traditional methods of blocking attack traffic – IP-based blocking, traffic rate controls, signatures and domain-based reputation – are no longer effective; the bots are learning and adapting too quickly. Instead, anti-automation products sit between the public internet and the organisation’s digital assets. These products use their own algorithms to detect non-human traffic, looking at characteristics such as the browser and device the traffic is coming from, and even assessing the movement of the device to determine whether it looks human. If the product is not sure, it can issue a challenge (such as a reCAPTCHA-style request) to confirm. Once the traffic has been evaluated, human traffic is allowed through and automated traffic is blocked. (A simplified sketch of this kind of decision logic follows below.)
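
To give a feel for that decision logic, here is a deliberately simplified, hypothetical sketch. Real anti-automation products draw on far richer signals (device telemetry, behavioural and machine-learning models); the signals, field names and thresholds below are assumptions chosen purely for illustration.

```python
"""
Illustrative sketch only: simplified allow / challenge / block logic of the
kind an anti-automation product applies per request. Signals and thresholds
are assumptions.
"""
import statistics

KNOWN_BOT_MARKERS = ("curl", "python-requests", "headlesschrome")

def classify_request(user_agent: str, has_cookies: bool,
                     recent_intervals_ms: list[float]) -> str:
    """Return 'block', 'challenge' or 'allow' for a single request."""
    ua = user_agent.lower()
    if any(marker in ua for marker in KNOWN_BOT_MARKERS):
        return "block"                  # self-identified automation
    # Humans produce irregular request timing; scripted clients are metronomic.
    if len(recent_intervals_ms) >= 5:
        jitter = statistics.pstdev(recent_intervals_ms)
        if jitter < 20:                 # suspiciously uniform timing
            return "challenge"          # e.g. serve a CAPTCHA-style test
    if not has_cookies:
        return "challenge"              # missing session state is another weak signal
    return "allow"

if __name__ == "__main__":
    print(classify_request("Mozilla/5.0", True, [420.0, 910.5, 133.2, 704.8, 388.1]))  # allow
    print(classify_request("python-requests/2.22", False, [100, 100, 100, 100, 100]))  # block
```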

How can we deploy these defences?

elevenM has worked with our clients to deploy anti-automation tools. The market is still new, and the tools have a spectrum of effectiveness as well as architectural impacts that take time and effort to work through. In an environment where time is short, this poses a significant transformation challenge. Having done this before, and being familiar with the products on the market, we can work with you to identify and deploy anti-automation protection tools along with the supporting processes. The key first step, as always in cyber security, is to look at your attack surface and the vectors most vulnerable to automated attacks, informed by a risk and cost assessment of what happens if those attacks succeed. From there, we design a protection approach and work with you to implement it.

Conclusion

Everyone is rightly focussing on automation and machine learning, but so are the criminals. It is crucial to look at your attack surface and identify where automated attacks are happening. There are now tools available to help significantly reduce the risks associated with automated cybercrime.

If you would like to discuss this further, please contact us using the details below.