Solving ransomware

We’re back in Baltimore. Unfortunately not to relive Arjun’s favourite pithy one-liners from The Wire, but to talk about something from the non-fiction genre: Ransomware.

In just a few years, ransomware has gone from nothing to a multi-billion-dollar industry. And it continues to grow. It’s little wonder that law enforcement agencies are quietly holding crisis summits to ask for help.

In May of this year, the City of Baltimore was hit with a ransomware attack. The ransomware used was called RobbinHood and it encrypted an estimated 10,000 networked computers. Email systems and payment platforms were taken offline. Baltimore’s property market also took a hit as people were unable to complete real estate sales.

One click away

Like most public sector technology environments, the City of Baltimore’s networks appear to have run a mix of old and new systems. Aging systems typically cannot be “patched” or updated against known security threats, leaving them vulnerable.

But getting funding to replace or update computing systems is difficult, especially when you are competing with critical services like police, fire and hospitals.

Given the hard reality that many large networks will have a high volume of outdated, and therefore vulnerable, systems that are only one mouse click away from becoming infected, should we not focus more on preventing malware from propagating?

Trust

Most global corporate networks operate on a trust principle: if you are part of the same domain or group of companies, you are trusted to connect to each other’s networks. This has obvious benefits, but it also brings a number of risks when we consider threats like ransomware.

Strategies

There are many strategies to mitigate the risk of a ransomware outbreak. Backing up your files, patching your computers and avoiding suspicious links or attachments are commonly advised. At elevenM, we recommend all of these; however, we also work closely with our clients on an often overlooked piece of the puzzle: Active Directory. The theory being: if your network cannot be used to spread malware, your exposure to ransomware is significantly reduced.

Monitoring Active Directory for threats

To understand this in more detail, let’s go back to Baltimore. According to reports, the Baltimore attack came through a breach of the City’s Domain Controller, a key piece of the Active Directory infrastructure. This was then used to deliver ransomware to 10,000 machines. What if Baltimore’s Active Directory had been integrated with security tools that allowed it to monitor, detect and contain ransomware instead of being used to propagate it?

Working with our clients and Active Directory-specific tools, we have been able to isolate and monitor Active Directory-based threat indicators, including:

  • Lateral movement
  • Obsolete systems
  • Brute force attempts
  • Anonymous user behaviour
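As an illustration, brute-force detection can be as simple as correlating failed-logon events exported from a Domain Controller. The sketch below is a minimal, hypothetical Python example (event ID 4625 is the Windows failed-logon event; the threshold and window are arbitrary assumptions), not a substitute for the dedicated tooling discussed above:

```python
from collections import defaultdict
from datetime import timedelta

# Windows Security event ID for a failed logon attempt.
FAILED_LOGON = 4625

def detect_brute_force(events, threshold=10, window=timedelta(minutes=5)):
    """Flag accounts with `threshold` or more failed logons inside `window`.

    `events` is an iterable of (timestamp, account, event_id) tuples,
    e.g. exported from Domain Controller security logs.
    """
    failures = defaultdict(list)
    for ts, account, event_id in events:
        if event_id == FAILED_LOGON:
            failures[account].append(ts)

    flagged = set()
    for account, times in failures.items():
        times.sort()
        # Slide a window of `threshold` consecutive failures; if any such
        # run fits inside `window`, flag the account.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.add(account)
                break
    return flagged
```

In practice this correlation happens inside the monitoring platform, but the principle is the same: the Domain Controller sees every authentication attempt, which makes it the natural vantage point for detection.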

All the pieces of the puzzle

In mitigating cyber threats, defence teams today have access to many tools and strategies. Often, there emerges a promised silver bullet to a particular threat. But the truth is that most threats will require a layered defence, involving multiple controls and core knowledge of common IT infrastructure (like Active Directory). Or to put it again in the language of the streets of Baltimore: “All the pieces matter”.

Want to hear more? Drop us a line at hello@elevenM.com

Mr Dutton, we need help with supplier risk

When we speak with heads of cyber, risk and privacy, eventually there comes a point when brows become more furrowed and the conversation turns to suppliers and the risk they pose.

There are a couple of likely triggers. First, APRA’s new CPS 234 regulations require regulated entities to evaluate a supplier’s information security controls. Second, there’s heightened awareness now in the business community that many data breaches suffered by organisations are ultimately a result of the breach of a supplier.

The problem space

Organisations today use hundreds or even thousands of suppliers for a multitude of services. The data shared and access given to deliver those services is increasingly so extensive that it has blurred the boundaries between organisation and supplier. In many cases, the supplier’s risk is the organisation’s risk.

Gaining assurance over the risk posed by a large number of suppliers, without using up every dollar of budget allocated to the cyber team, is an increasingly difficult challenge.

Assurance

To appreciate the scope of the challenge, we first need to understand the concept of “assurance”, a term not always well understood outside the worlds of risk and assurance. So let’s take a moment to clarify, using DLP (Data Loss Prevention) as an example.

To gain assurance over a control, you must evaluate both the design and operating effectiveness of that control. APRA’s new information security regulation CPS 234 states that regulated entities require both when assessing the information security controls they rely upon to manage their risk, even if that control sits with a supplier. So what would that entail in this example?

  • Design effectiveness means confirming that the DLP tool covers all information sources and potential exit points for your data, and that data is marked so it can be monitored by the tool. Evidence of the control working would be kept.
  • Operating effectiveness means proving (using the evidence above) that the control has been running for the whole period it was supposed to.
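To make the distinction concrete, a minimal operating-effectiveness check might verify that evidence (say, a daily DLP alert-review log) exists for every day of the assessment period. The sketch below is a hypothetical illustration of that idea, not how any particular audit tool works:

```python
from datetime import timedelta

def operating_effectiveness(evidence_dates, period_start, period_end):
    """Return the days in the assessment period with no supporting evidence.

    An empty result suggests the control operated throughout the period;
    any gaps would need to be explained or treated as control failures.
    """
    evidence = set(evidence_dates)
    gaps = []
    day = period_start
    while day <= period_end:
        if day not in evidence:
            gaps.append(day)
        day += timedelta(days=1)
    return gaps
```

Note that this only tests operating effectiveness; design effectiveness (does the control cover the right sources and exit points?) still requires its own evaluation.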

The unfortunate reality of assurance

In previous roles, members of our team have helped design and run market-leading supplier risk services. But these services never actually gave any assurance, unlike audit reports (e.g. SOC 2, ASAE). Supplier risk reports typically include a familiar caveat: “this report is not an audit and does not constitute assurance”.

This is because the supplier risk service typically involves the consulting firm sending the supplier a spreadsheet, which the supplier fills in, after which the consulting firm asks for evidence to support the responses.

This process provides little insight as to the design or operating effectiveness of a control. If the worst case happens and a supplier is breached, the organisation will point to the consulting firm, and the consulting firm will point to that statement in the report that said the service they were providing did not constitute assurance.

We need your help, Mr Dutton

The reality is that every organisation getting actual assurance over every control at each of its suppliers is just not a feasible option.

We believe Australia needs a national scheme to manage supplier risk. A scheme in which baseline security controls are properly audited for their design and operating effectiveness, where assurance is gained and results are shared as needed. This would allow organisations to focus their cyber budget and energies on gaining assurance over the specific controls at suppliers that are unique to their service arrangement.

Last week, Home Affairs Minister Peter Dutton issued a discussion paper seeking input into the nation’s 2020 cyber security strategy. This is a great opportunity for industry to put forward the importance of a national and shared approach to managing supplier risk in this country. We will be putting forward this view, and some of the ideas in this post, in our response.

We encourage those of you struggling with supplier risk to do the same. If you would like to contribute to our response, please drop us a line here.

Let’s take this seriously

Why would it be offensive when someone tells you they care about the very thing you want them to care about?  When your behaviour harms another because you overlooked something important, isn’t it good to convey that you do in fact care about that thing?

This might seem intuitive in the context of personal relationships, but often falls flat when organisations talk about privacy and cyber security. This week – in Privacy Awareness Week – we remind ourselves that demonstrating a commitment to privacy goes beyond soundbites and snappy one-liners.

“[Insert company name] takes privacy and security seriously” is increasingly one of the more jarring (and ill-advised) things a company can say today, especially in the wake of a breach.

It doesn’t sit well with journalists. You can almost hear their collective sigh every time a media statement containing that phrase is launched from corporate HQ.

Yet companies do put it in there, and usually at the very top.

Earlier this year, TechCrunch journalist Zack Whittaker scoured every data breach notification in California and found a third of companies had some variation of this “common trope”.

Whittaker wasn’t impressed: “The truth is, most companies don’t care about the privacy or security of your data. They care about having to explain to their customers that their data was stolen.”

For years, companies adopted a cloak-and-dagger attitude to any public commentary about privacy and security. “We don’t discuss matters of security” was a handy way for corporate affairs teams to bat away pesky tech and infosec journos, much like they might say “the matter is before the courts” in other awkward contexts.

This approach began to fray as companies realised cyber security and privacy issues weren’t purely technical stories. Breached data impacted real people today. Vulnerable systems could affect people tomorrow. And the community was becoming more vocal and aware.

We began to see companies eager to show they cared. And so … “We take privacy and security very seriously.”

But why should that rankle so much?

Simply because we intuitively detect something’s not right when a company or a person in our life glibly tells us they hold a position that contrasts with the evidence. In fact, it’s awkward.

Ask Mark Zuckerberg. Earlier this month, standing under a banner that read “the future is private”, the Facebook CEO proclaimed privacy was at the heart of Facebook’s new strategy. The awkwardness was so intense that Zuckerberg even sought to dissolve it with humour, rather unsuccessfully.

The gap between messages of care and diligence for data protection and what consumers actually experience doesn’t only relate to Facebook. 

A number of breaches are the result of insufficient regard by a company for how customer data is used – such as unauthorised sharing with third parties – or the result of an avoidable mistake – like failing to fix a security flaw in a server where the patch has been available for months. And when companies insist they care while simultaneously trying to evade their responsibilities, tempering a sense of cynicism becomes even harder.

The state of the cyber landscape contributes too. Threats are intensifying, more breaches are happening and there are mandatory reporting requirements. Pick up a newspaper and odds on there’s a breach story in there. It’s not unreasonable for consumers to think there’s an epidemic of businesses losing sensitive data, yet somehow they’re all identically proclaiming to take data protection very seriously. It doesn’t add up.

At the same time, it should be possible for an organisation to affirm a commitment to data protection, even in the wake of a breach. Because it’s possible for a company to care deeply about privacy and security, to have invested greatly in these areas, and still be breached. Attackers are more skilled and determined, and it’s challenging to protect data that is everywhere thanks to the use of cloud technologies and third parties.

So we can cut organisations a little slack. But the way forward is not reverting to a catchy set of words alone.

As we learned from the 12-month review of the Notifiable Data Breaches scheme published this week by the Office of the Australian Information Commissioner, consumers and regulators want (and deserve) to see actions and responses that reflect empathy, accountability and transparency. They expect organisations to show a genuine commitment to reducing harm, such as in the assistance they provide victims after a breach. A willingness to continuously update the public on the key details of a breach, with simple advice on what to do about it, also shows a genuine focus on the issue and a willingness to be transparent. And when company leaders are visible and take responsibility, it tells customers they will be accountable for putting things right.

Do these things, and there’s a better chance customers will take your commitment to privacy and security seriously.

Anti-Automation

You may think from the title we’re about to say how we oppose automation or think IT spend should be directed somewhere else. We are not. We love automation and consider it a strategic imperative for most organisations. But there is a problem: the benefits of automation apply to criminals just as much as they do to legitimate organisations.

Why criminals love automation

Success in cybercrime generally rests on two things: having a more advanced capability than those who are defending, and having the ability to scale your operation. Automation helps with both. Cybercriminals use automated bots (we term these ‘bad bots’) to attack their victims, meaning a small number of human criminals can deliver a large return. For the criminals, fewer people means fewer people to share in the rewards and a lower risk of someone revealing the operation to the authorities or its secrets to rival criminals. Couple this with machine learning and criminals can rapidly adapt how their bots attack a victim based on the experience of attacking other victims. As victims improve their security, the bots learn from other cases how to resume their attacks.

What attacks are typically automated?

Attacks take many forms but two stand out: financial fraud and form filling. For financial fraud, bad bots exploit organisations’ payment gateways to wash through transactions using stolen credit card details. For major retailers, the transactions will typically be small (often around $10.00) to test which card details are valid and working. The criminals then use the successful details to commit larger frauds until the card details no longer work. For form filling, bad bots exploit websites that have forms for users to provide information. Depending on the site and the attack vector of the bot, form filling attacks can be used for a number of outcomes, such as filling a CRM system with dummy ‘new customer’ data, content scraping, and advanced DDoS attacks that, thanks to automation, can be configured to reverse engineer WAF rules and work out how to get through undetected.
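The card-testing pattern described above (bursts of small transactions probing for valid card details) lends itself to a simple heuristic. The sketch below is illustrative only; the $10 threshold and attempt count are assumptions for the example, and real fraud engines weigh many more signals:

```python
from collections import defaultdict

def flag_card_testing(transactions, max_amount=10.00, min_attempts=5):
    """Flag cards showing a card-testing pattern: many small 'test' charges.

    `transactions` is an iterable of (card_id, amount) pairs.
    """
    small_charges = defaultdict(int)
    for card_id, amount in transactions:
        if amount <= max_amount:
            small_charges[card_id] += 1
    # Cards with an unusual number of small charges are candidates for
    # review or step-up verification, not automatic blocking.
    return {card for card, count in small_charges.items()
            if count >= min_attempts}
```

Even a crude rule like this can surface the probing phase before the criminals move on to larger frauds with the validated card details.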

Real business impact

The reason we at elevenM feel strongly about this is that we are seeing real business impact from these attacks. Take a simple metric like OPEX costs for web infrastructure: we have seen businesses dealing with this automated traffic have their infrastructure costs increase by 35%. There are also clear productivity impacts from managing customer complaints about password lockouts, which can be crippling to high-volume, low-workforce businesses. And then there is fraud, something that impacts not only the business but the market and society as a whole.

How can we defend against them?

Traditional methods of blocking attack traffic, such as IP-based blocking, traffic rate controls, signatures and domain-based reputation, are no longer effective; the bots are learning and adapting too quickly. Instead, anti-automation products sit between the public internet and the organisation’s digital assets and apply their own algorithms to detect non-human traffic. These algorithms look at a variety of characteristics, such as the browser and device the traffic is coming from, and can even assess the movement of the device to determine whether it looks human. If the product is not sure, it can issue a challenge (such as a reCAPTCHA-style request) to confirm. Once the traffic has been evaluated, human traffic is allowed through and automated traffic is blocked.
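Conceptually, these products score each request on multiple weak signals and only challenge or block when the combined score crosses a threshold. The toy scorer below illustrates that allow/challenge/block flow; the signals, weights and thresholds are invented for the example and are far cruder than any commercial product’s algorithms:

```python
def classify_request(user_agent, requests_per_minute, has_mouse_movement):
    """Return 'allow', 'challenge' or 'block' for a request.

    Toy heuristic: each bot-like signal adds to a score; low scores pass
    straight through, borderline scores get a CAPTCHA-style challenge,
    and high scores are blocked.
    """
    score = 0
    if not user_agent or "headless" in user_agent.lower():
        score += 2  # missing or automation-tool user agent
    if requests_per_minute > 60:
        score += 2  # faster than a human plausibly browses
    if not has_mouse_movement:
        score += 1  # no human-like device movement observed
    if score >= 4:
        return "block"
    if score >= 2:
        return "challenge"
    return "allow"
```

The challenge tier is what distinguishes this approach from crude blocking: a borderline human gets a solvable puzzle rather than an outright denial, while a bot fails it and is dropped.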

How can we deploy these defences?

elevenM has worked with our clients to deploy anti-automation tools. The market is still new, and as such the tools vary in effectiveness and carry architectural impacts that require time and effort to work through. In an environment where time is short, this poses a significant transformation challenge. Having done this before, and being familiar with the products on the market, we can work with you to identify and deploy anti-automation tools and the supporting processes. The key first step, as always in cyber security, is to look at your attack surface and the vectors most vulnerable to automated attacks, informed by a risk and cost assessment of what happens if attacks are successful. From there, we work with you to design and implement a protection approach.

Conclusion

Everyone is rightly focussing on automation and machine learning, but so are the criminals. It is crucial to look at your attack surface and identify where automated attacks are happening. There are now tools available to help significantly reduce the risks associated with automated cybercrime.

If you would like to discuss this further, please contact us using the details below.

Happy birthday Notifiable Data Breaches Scheme. How have you performed?

A year ago today, Australian businesses became subject to a mandatory data breach reporting scheme. Angst and anticipation came with its introduction – angst for the disruption it might have on unprepared businesses and anticipation of the positive impact it would have for privacy.

Twelve months on, consumers are arguably more troubled about the lack of safeguards for privacy, while businesses face the prospect of further regulation and oversight. Without a fundamental shift in how privacy is addressed, the cycle of heightened concern followed by further regulation looks set to continue.

It would be folly to pin all our problems on the Notifiable Data Breaches (NDB) scheme. Some of the headline events that exacerbated community privacy concerns in the past year fell outside its remit. The Facebook / Cambridge Analytica scandal stands out as a striking example.

The NDB scheme has also made its mark. For one, it has heralded a more transparent view of the state of breaches. More than 800 data breaches have been reported in the first year of the scheme.

The data also tells us more about how breaches are happening. Malicious attacks are behind the majority of breaches, though humans play a substantial role. Not only do about a third of breaches involve a human error, such as sending a customer’s personal information to the wrong person, but a large portion of malicious attacks directly involve human factors such as convincing someone to give away their password.

And for the most part, businesses got on with the task of complying. In many organisations, the dialogue has shifted from preventing breaches to being well prepared to manage and respond to them. This is a fundamentally positive outcome – as data collection grows and cyber threats get more pernicious, breaches will become more likely and businesses, as they do with the risk of fire, ought to have plans and drills to respond effectively.

And still, the jury is out on whether consumers feel more protected. Despite the number of data breach notifications in the past year, events suggest it would be difficult to say transparency alone had improved the way businesses handle personal information.

The sufficiency of our legislative regime is an open question. The ACCC is signalling it will play a stronger role in privacy, beginning with recommending a strengthening of protections under the Privacy Act. Last May, the Senate also passed a motion to bring Australia’s privacy regime in line with Europe’s General Data Protection Regulation (GDPR), a much more stringent and far-reaching set of protections.

Australian businesses ought not be surprised. The Senate’s intent aligns to what is occurring internationally. In the US, where Facebook’s repeated breaches have catalysed the public and polity, moves are afoot towards new federal privacy legislation. States like California have already brought in GDPR-like legislation, while Asian countries are similarly strengthening their data protection regimes. With digital protections sharpening as a public concern, a federal election in Australia this year further adds to the possibility of a strengthened approach to privacy by authorities.

Businesses will want to free themselves of chasing the tail of compliance to an ever-moving regulatory landscape. Given the public focus on issues of trust, privacy also emerges as a potential competitive differentiator.

A more proactive and embedded approach to privacy addresses both these outcomes. Privacy by design is an emerging discipline in which privacy practices are embedded from the outset. With privacy in mind early, new business initiatives can be designed to meet privacy requirements before they are locked into a particular course of action.

We also need to look to the horizon, and it’s not as far away as we think. Artificial intelligence (AI) is already pressing deep within many organisations, and raises fundamental questions about whether current day privacy approaches are sufficient. AI represents a paradigm shift that challenges our ability to know in advance why we are collecting data and how we intend to use it.

And so, while new laws introduced in the past 12 months were a major step forward in the collective journey to better privacy, in many ways the conversation is just starting.

The difference between NIST CSF maturity and managing cyber risk

Yesterday marked the fifth anniversary of what we here at elevenM think is the best cyber security framework in the world, the NIST Cybersecurity Framework (CSF). While we could be writing about how helpful the framework has been in mapping current and desired cyber capabilities or prioritising investment, we thought it important to tackle a problem we are seeing more and more with the CSF: the use of the CSF as an empirical measurement of an organisation’s cyber risk posture.

Use versus intention

Let’s start with a quick fact: the CSF was never designed to provide a quantitative measurement of cyber risk mitigation. Instead, it was designed as a capability guide, a tool to help organisations map their current cyber capability against a set of capabilities that NIST considers best practice.

NIST CSF ’Maturity’

Over the past five years, consultancies and cyber security teams have used the CSF to demonstrate to those not familiar with cyber capabilities that they have the right ones in place. Most have done this by assigning a maturity score to each subcategory of the CSF. Just to be clear, we consider a NIST CSF maturity assessment to be a worthwhile exercise; we have even built a platform to help our clients do just that. What we do not support, however, is the use of maturity ratings as a measurement of cyber risk mitigation.

NIST CSF versus NIST 800-53

This is where the devil truly is in the detail. For those unfamiliar, NIST CSF maturity is measured using a set of maturity statements assessed against the Capability Maturity Model (CMM). (Note that NIST has never produced its own statements, so most organisations and consultancies, elevenM included, have developed proprietary ones.) As you can imagine, the assessment performed to determine one maturity level against another is often highly subjective, usually via interview and document review. In addition, these maturity statements do not address the specific cyber threats or risks to the organisation; they are designed to determine whether the organisation has the capability in place.

NIST 800-53, on the other hand, is NIST’s cyber security controls library: a set of best practice controls which can be formally assessed for both design and operating effectiveness as part of an assurance program. Not subjective, but an empirical, evidence-based assessment that can be aligned to the CSF (NIST has provided this mapping) or to a specific organisational threat. Do you see what we are getting at?
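The difference can be made concrete. A maturity assessment averages subjective ratings, while a controls assessment counts evidence-based pass/fail results. The sketch below is a deliberately simplified illustration of why the two can tell very different stories about the same organisation; the identifiers and scales are examples only:

```python
def csf_maturity(subcategory_ratings):
    """Average of subjective CMM ratings (1-5) across CSF subcategories."""
    return sum(subcategory_ratings.values()) / len(subcategory_ratings)

def control_effectiveness(control_results):
    """Share of controls (e.g. from NIST 800-53) that passed evidence-based
    testing for BOTH design and operating effectiveness."""
    passed = sum(1 for design_ok, operating_ok in control_results.values()
                 if design_ok and operating_ok)
    return passed / len(control_results)
```

An organisation can self-rate a healthy maturity average while the underlying controls, once tested against evidence, show a much lower pass rate, which is precisely why a maturity score should not be read as a measure of risk mitigation.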

Which is the correct approach?

Like most things, it depends on your objective. If you want to demonstrate to those unfamiliar with cyber operations that you have considered all that you should, or if you want to build a capability, the CSF is the way to go. (Note that doing a CSF maturity assessment without assessing the underlying controls limits the trust stakeholders can place in the maturity rating.)

If however, you want to demonstrate that you are actively managing the cyber risk of your organisation, we advise our clients to assess the design and operating effectiveness of their cyber security controls. How do you know if you have the right controls to manage the cyber risks your organisation faces? We will get to that soon. Stay tuned.