Solving ransomware

We’re back in Baltimore. Unfortunately not to relive Arjun’s favourite pithy one-liners from The Wire, but to talk about something from the non-fiction genre: Ransomware.

In just a few years, ransomware has gone from nothing to a multi-billion dollar industry. And it continues to grow. It’s little wonder that law enforcement are quietly holding crisis summits to ask for help.

In May of this year, the City of Baltimore was hit with a ransomware attack. The ransomware used was called RobbinHood and it encrypted an estimated 10,000 networked computers. Email systems and payment platforms were taken offline. Baltimore’s property market also took a hit as people were unable to complete real estate sales.

One click away

Like most public sector technology environments, there appears to have been a mix of old and new systems on the City of Baltimore networks. Precisely because they are old, these aging systems typically cannot be “patched” or updated against known security threats, leaving them vulnerable.

But getting funding to replace or update computing systems is difficult, especially when you are competing with critical services like police, fire and hospitals.

Given the hard reality that many large networks will have a high volume of outdated, and therefore vulnerable, systems that are only one mouse click away from becoming infected, should we not focus more on preventing malware from propagating?

Trust

Most global corporate networks operate using a trust principle. If you are part of the same domain or group of companies, you are trusted to connect to each other’s network. This has obvious benefits, but it also brings a number of risks when we consider threats like ransomware.

Strategies

There are many strategies to mitigate the risk of a ransomware outbreak. Backing up your files, patching your computers and avoiding suspicious links or attachments are commonly advised. At elevenM, we recommend these strategies; however, we also work closely with our clients on an often overlooked piece of the puzzle: Active Directory. The theory being: if your network cannot be used to spread malware, your exposure to ransomware is significantly reduced.

Monitoring Active Directory for threats

To understand this in more detail, let’s go back to Baltimore. According to reports, the Baltimore attack came through a breach of the City’s Domain Controller, a key piece of the Active Directory infrastructure. This was then used to deliver ransomware to 10,000 machines. What if Baltimore’s Active Directory had been integrated with security tools that allowed it to monitor, detect, and contain ransomware instead of being used to propagate it?

Working with our clients and Active Directory-specific tools, we have been able to isolate and monitor Active Directory-based threat indicators, including the following (a minimal detection sketch follows the list):

  • Lateral movement restriction
  • Obsolete systems
  • Brute force detection
  • Anonymous user behaviour
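
To make one of these concrete, below is a minimal, hypothetical sketch of a brute force indicator: a spike of failed logons (Windows Security Event ID 4625) against a single account in a short window. The file name, column names and thresholds are illustrative assumptions only, not a description of any specific product; dedicated Active Directory monitoring tools do this continuously and at far greater scale.

```python
# Minimal sketch of a brute force indicator: many failed logons against one
# account in a short window. Assumes a hypothetical CSV export named
# "security_events.csv" with columns "timestamp" (ISO 8601), "event_id" and
# "account" -- adjust to whatever your SIEM or log collector actually produces.

import csv
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # look-back window
THRESHOLD = 20                   # failed logons per account per window

failures = defaultdict(list)     # account -> list of failure timestamps

with open("security_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["event_id"] == "4625":  # Windows failed logon event
            failures[row["account"]].append(datetime.fromisoformat(row["timestamp"]))

for account, times in failures.items():
    times.sort()
    start = 0
    for end, t in enumerate(times):
        # slide the window forward so it only spans WINDOW of time
        while t - times[start] > WINDOW:
            start += 1
        if end - start + 1 >= THRESHOLD:
            print(f"Possible brute force against {account}: "
                  f"{end - start + 1} failed logons within {WINDOW}")
            break
```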

All the pieces of the puzzle

In mitigating cyber threats, defence teams today have access to many tools and strategies. Often, there emerges a promised silver bullet to a particular threat. But the truth is that most threats will require a layered defence, involving multiple controls and core knowledge of common IT infrastructure (like Active Directory). Or to put it again in the language of the streets of Baltimore: “All the pieces matter“.

Want to hear more? Drop us a line at hello@elevenM.com

The unfairness of cyber awareness

elevenM Principal Arjun Ramachandran explores why cyber awareness matters, despite the prevalence of seemingly unstoppable sophisticated cyber-attacks.



“Deserve got nuthin’ to do with it. It’s his time, that’s all.”
– Snoop, The Wire.

We want to believe our behaviours solely determine the outcomes we get. But it’s not always the case, especially in the complex cyber realm.

The brilliant US drama The Wire made an artform of summing up life’s hard truths in pithy one-liners, delivered in the language of the street. In Season 5, drug gang member Snoop is asked by a junior gang member whether a target really “deserves” to be “hit”. Her response (above) lays bare the unfairness at the heart of the adversarial drug war.

Cyber security too, ain’t always fair. The existence of a committed, human adversary is a significant and differentiating feature of cyber risk that those of us involved in the field should keep in mind.

Especially in the areas of security training and education. We often seek inspiration from areas like public health, where highly-acclaimed campaigns have raised awareness of the risks of smoking and skin cancer, driving down risky behaviours and vastly reducing the incidence of bad outcomes.

But these areas don’t have a human adversary. In cyber, for all of our awareness and reduction of risky behaviours, it remains the case that a determined, highly-sophisticated attacker could still get at a company’s crown jewels by persistently probing for small areas or moments of weakness.

The attack on the Australian National University is a case in point, recently and evocatively labelled a “diamond heist” by its vice-chancellor, rather than a “smash and grab”.

“It was an extremely sophisticated operation, most likely carried out by a team of between five to 15 people working around the clock”. – ANU vice-chancellor Brian Schmidt

While it may be true that a well-educated and aware workforce might not “deserve” to get hacked, Snoop’s street wisdom and the ANU hack suggest that increasing the awareness of end users may still not be enough to prevent the most sophisticated attacks, such as those by highly-skilled state-sponsored attackers.

And awareness on its own stands to be defeated. The UK’s National Cyber Security Centre points out that people-focused activities such as education must come with technical controls, as part of a multi-layered approach. That’s a sentiment recently echoed by the Australian Government.

“But like all other forms of security, awareness is a complement to, not replacement for, the availability of secure features. For example, drivers are provided with a seat belt in addition to education about the importance of road safety and incentives to use the seat belt. And the same expectations and requirements we have where safety is paramount should apply in cyberspace” – Australia’s 2020 Cyber Security Strategy – A call for views

But we also can’t throw the baby out with the bath water.

In our travels, we occasionally come across a certain bluntness or defeatism about cyber awareness. Because of the success of and attention given to state-sponsored attacks, education and awareness is labelled “ineffective” and technical controls are deemed all that matter.

In our view this is a severe over-correction.

It pays to remember that there exists a broad swathe of attackers – not every attacker coming for a small business (or even an enterprise) is bankrolled by a rogue state and has access to an arsenal of zero-day exploits.  

In fact, many are commercially-motivated cybercriminals of varying levels of ability, plying their trade using commodity tools purchased off underground marketplaces. They can be as sensitive to cost pressures as the CEO of a cash-poor business. Anything that makes it harder (ie costlier) to achieve their goals may be enough to prompt these actors to move on to an easier, more cost-effective target.

One of the ways we help businesses do this – such as through our recently developed learning packages – is by raising employees’ awareness of the risks and providing actionable advice on how they can make the average cyber attacker’s life that little bit more frustrating. Maybe a stronger password, or a healthier scepticism towards dubious emails, will do the trick.

While technical controls might overtake end-user awareness as the best response to a specific cyber threat (eg. some now argue multi-factor authentication should be prioritised as a response to phishing), an effective awareness program can re-deploy the fruitful conversation it has established with staff to the next emerging area of risk (eg. how staff use cloud services).

In this way, over the long term awareness activities also continually embed a sense of responsibility and ownership in a workforce, acting as a precursor to and an enabler of a secure culture.

Mr Dutton, we need help with supplier risk

When we speak with heads of cyber, risk and privacy, eventually there comes a point when brows become more furrowed and the conversation turns to suppliers and the risk they pose.

There are a couple of likely triggers. First, APRA’s new CPS 234 regulation requires regulated entities to evaluate their suppliers’ information security controls. Second, there’s heightened awareness now in the business community that many data breaches suffered by organisations are ultimately the result of a breach at a supplier.

The problem space

Organisations today use hundreds or even thousands of suppliers for a multitude of services. The data shared and access given to deliver those services is increasingly so extensive that it has blurred the boundaries between organisation and supplier. In many cases, the supplier’s risk is the organisation’s risk.

Gaining assurance over the risk posed by a large number of suppliers, without using up every dollar of budget allocated to the cyber team, is an increasingly difficult challenge.

Assurance

To appreciate the scope of the challenge, we first need to understand the concept of “assurance”, a term not always well understood outside the worlds of risk and assurance. So let’s take a moment to clarify, using DLP (Data Loss Prevention) as an example.

To gain assurance over a control, you must evaluate both the design and the operating effectiveness of that control. APRA’s new information security regulation CPS 234 requires regulated entities to do both when assessing the information security controls they rely upon to manage their risk, even if a control sits with a supplier. So what would that entail in our DLP example? (A sketch of what operating effectiveness evidence might look like follows the list.)

  • Design effectiveness means confirming that the DLP tool covers all information sources and potential exit points for your data, and that data is marked so the tool can monitor it. Evidence of the control working would be kept.
  • Operating effectiveness means proving (using the evidence above) that the control has been running for the whole period it was supposed to.
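
As a rough illustration of the operating effectiveness half, here is a minimal sketch: checking that evidence of the DLP control (say, daily log exports) exists for every day of the review period. The period, dates and inputs are invented for illustration; in practice they would come from the tool itself or an evidence register.

```python
# Minimal sketch of one operating effectiveness check: confirm the (hypothetical)
# DLP control produced evidence for every day of the review period. Gaps suggest
# the control was not running, or not logging, for part of the period.

from datetime import date, timedelta

# Assumed inputs for illustration only.
period_start, period_end = date(2019, 1, 1), date(2019, 3, 31)
evidence_dates = {date(2019, 1, 1) + timedelta(days=i) for i in range(85)}  # example data

expected = {period_start + timedelta(days=i)
            for i in range((period_end - period_start).days + 1)}
gaps = sorted(expected - evidence_dates)

if gaps:
    print(f"{len(gaps)} day(s) without evidence, e.g. {gaps[0]} to {gaps[-1]}")
else:
    print("Evidence covers the full review period")
```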

The unfortunate reality of assurance

In previous roles, members of our team have been part of designing and running market-leading supplier risk services. But these services never actually gave any assurance, unlike audit reports (eg. SOC2, ASAE etc). Supplier risk reports typically include a familiar caveat: “this report is not an audit and does not constitute assurance”.

This is because the supplier risk service that is delivered involves the consulting firm sending a supplier a spreadsheet, which the supplier fills in, prompting the consulting firm to ask for evidence to support the responses.

This process provides little insight as to the design or operating effectiveness of a control. If the worst case happens and a supplier is breached, the organisation will point to the consulting firm, and the consulting firm will point to that statement in the report that said the service they were providing did not constitute assurance.

We need your help, Mr Dutton

The reality is that every organisation getting actual assurance over every control at each of its suppliers is just not a feasible option.

We believe Australia needs a national scheme to manage supplier risk. A scheme in which baseline security controls are properly audited for their design and operating effectiveness, where assurance is gained and results are shared as needed. This would allow organisations to focus their cyber budget and energies on gaining assurance over the specific controls at suppliers that are unique to their service arrangement.

Last week, Home Affairs Minister Peter Dutton issued a discussion paper seeking input into the nation’s 2020 cyber security strategy. This is a great opportunity for industry to put forward the importance of a national and shared approach to managing supplier risk in this country. We will be putting forward this view, and some of the ideas in this post, in our response.

We encourage those of you struggling with supplier risk to do the same. If you would like to contribute to our response, please drop us a line here.

Anti-Automation

You may think from the title we’re about to say how we oppose automation or think IT spend should be directed somewhere else. We are not. We love automation and consider it a strategic imperative for most organisations. But there is a problem: the benefits of automation apply to criminals just as much as they do to legitimate organisations.

Why criminals love automation

Success in cybercrime generally rests on two things: having a more advanced capability than those who are defending, and having the ability to scale your operation. Automation helps with both. Cybercriminals use automated bots (we term these ‘bad bots’) to attack their victims, meaning a small number of human criminals can deliver a large return. For the criminals, fewer people means fewer people to share in the rewards and a lower risk of someone revealing the operation to the authorities or its secrets to rival criminals. Coupled with machine learning, criminals can rapidly adapt how their bots attack victims based on the experience of attacking their other victims. As victims improve their security, the bots learn from other cases how to resume their attacks.

What attacks are typically automated?

Attacks take many forms but two stand out: financial fraud and form filling. For financial fraud, bad bots exploit organisations’ payment gateways to wash through transactions using stolen credit card details. For major retailers, the transactions will typically be small (often $10.00) to test which card details are valid and working. The criminals then use the successful details to commit larger frauds until the card details no longer work. For form filling, bad bots exploit websites that have forms for users to provide information. Depending on the site and the bot’s attack vector, form-filling attacks can be used for a number of outcomes, such as filling a CRM system with dummy ‘new customer’ data, content scraping, and advanced DDoS attacks that, thanks to automation, can be configured to reverse engineer WAF rules and work out how to get through undetected.
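
As a rough illustration of how the card-testing pattern above might surface in transaction data, here is a minimal sketch. The records, field names and thresholds are invented assumptions; real fraud detection draws on far richer signals than this.

```python
# Minimal sketch of a card-testing heuristic: flag sources that attempt many
# small-value transactions across many distinct cards. All values are invented.

from collections import defaultdict

transactions = [
    # (source_ip, card_last4, amount, approved)
    ("203.0.113.7", "4242", 10.00, False),
    ("203.0.113.7", "1881", 10.00, False),
    ("203.0.113.7", "0005", 10.00, True),
    ("198.51.100.3", "7777", 150.00, True),
]

SMALL_AMOUNT = 15.00       # card testers favour low-value probes
MIN_DISTINCT_CARDS = 3     # many different cards from one source is suspicious

cards_by_source = defaultdict(set)
for source, card, amount, approved in transactions:
    if amount <= SMALL_AMOUNT:          # testers probe whether approved or not
        cards_by_source[source].add(card)

for source, cards in cards_by_source.items():
    if len(cards) >= MIN_DISTINCT_CARDS:
        print(f"Possible card testing from {source}: {len(cards)} distinct cards probed")
```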

Real business impact

The reason we at elevenM feel strongly about this is that we are seeing real business impact from these attacks. Take a simple metric like OPEX costs for web infrastructure: we have seen businesses dealing with such automated traffic have their infrastructure costs increase by 35%. There are clear productivity impacts from managing customer complaints about password lockouts, which can be crippling to high-volume, low-workforce businesses. And then there is fraud, something that impacts not only the business but the market and society as a whole.

How can we defend against them?

Traditional methods of blocking attack traffic, such as IP-based blocking, traffic rate controls, signatures and domain-based reputation, are no longer effective; the bots are learning and adapting too quickly. Instead, anti-automation products sit between the public internet and the organisation’s digital assets. These products use their own algorithms to detect non-human traffic, looking at characteristics such as the browser and device the traffic comes from, and even assessing the movement of the device to determine whether it looks human. If the product is not sure, it can issue a challenge (such as a reCaptcha-style request) to confirm. Once the traffic has been evaluated, human traffic is allowed through and automated traffic is blocked.
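
The decision flow can be illustrated with a small, hypothetical sketch of the allow / challenge / block logic. The signals and weights below are invented for illustration only; commercial anti-automation products draw on far richer device, network and behavioural telemetry than this.

```python
# Hypothetical sketch of an allow / challenge / block decision. The signals,
# weights and thresholds are illustrative assumptions, not any product's logic.

KNOWN_AUTOMATION_AGENTS = ("curl", "python-requests", "scrapy")

def bot_score(request: dict) -> float:
    """Return a rough score from 0 (human-like) to 1 (bot-like)."""
    score = 0.0
    agent = request.get("user_agent", "").lower()
    if not agent or any(bad in agent for bad in KNOWN_AUTOMATION_AGENTS):
        score += 0.5
    if not request.get("accept_language"):      # headless clients often omit this
        score += 0.2
    if request.get("mouse_events", 0) == 0:     # no interaction telemetry observed
        score += 0.3
    return min(score, 1.0)

def decide(request: dict) -> str:
    score = bot_score(request)
    if score >= 0.8:
        return "block"
    if score >= 0.4:
        return "challenge"   # eg. issue a reCaptcha-style test
    return "allow"

print(decide({"user_agent": "Mozilla/5.0", "accept_language": "en-AU", "mouse_events": 14}))
print(decide({"user_agent": "python-requests/2.22", "mouse_events": 0}))
```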

How can we deploy these defences?

elevenM has worked with our clients to deploy anti-automation tools. The market is still new, so the tools vary in effectiveness and carry architectural impacts that take time and effort to work through. In an environment where time is short, this poses a significant transformation challenge. Having done this before and being familiar with the products out there, we can work with you to identify and deploy anti-automation protection tools and the supporting processes. The key first step, as always with cyber security, is to look at your attack surface and the vectors most vulnerable to automated attacks, subject to a risk and cost assessment of what happens if attacks are successful. From there, we design a protection approach and work with you to implement it.

Conclusion

Everyone is rightly focussing on automation and machine learning, but so are the criminals. It is crucial to look at your attack surface and identify where automated attacks are happening. There are now tools available to help significantly reduce the risks associated with automated cybercrime.

If you would like to discuss this further, please contact us using the details below.

The difference between NIST CSF maturity and managing cyber risk

Yesterday marked the fifth anniversary of what we here at elevenM think is the best cyber security framework in the world, the NIST Cybersecurity Framework (CSF). While we could be writing about how helpful the framework has been in mapping current and desired cyber capabilities or prioritising investment, we thought it important to tackle a problem we are seeing more and more with the CSF: The use of the CSF as an empirical measurement of an organisation’s cyber risk posture.

Use versus intention

Let’s start with a quick fact. The CSF was never designed to provide a quantitative measurement of cyber risk mitigation. Instead, it was designed as a capability guide. A tool to help organisations map out their current cyber capability to a set of capabilities which NIST consider to be best practice.

NIST CSF ’Maturity’

Over the past five years, consultancies and cyber security teams have used the CSF as a way to demonstrate to those not familiar with cyber capabilities that they have the right ones in place. Most have done this by assigning a maturity score to each subcategory of the CSF. Just to be clear, we consider a NIST CSF maturity assessment to be a worthwhile exercise. We have even built a platform to help our clients do just that. What we do not support, however, is the use of maturity ratings as a measurement of cyber risk mitigation.

NIST CSF versus NIST 800-53

This is where the devil truly is in the detail. For those unfamiliar, NIST CSF maturity is measured using a set of maturity statements (note that NIST has never produced its own, so most organisations and consultancies, elevenM included, have developed proprietary statements) assessed against the Capability Maturity Model (CMM). As you can imagine, the assessment performed to determine one maturity level against another is often highly subjective, usually via interview and document review. In addition, these maturity statements do not address the specific cyber threats or risks to the organisation; they are designed to determine whether the organisation has the capability in place.

NIST 800-53 on the other hand is NIST’s cyber security controls library. A set of best practice controls which can be formally assessed for both design and operating effectiveness as part of an assurance program. Not subjective, rather an empirical and evidence-based assessment that can be aligned to the CSF (NIST has provided this mapping) or aligned to a specific organisational threat. Do you see what we are getting at here?
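
The difference can be shown with a small, hypothetical example: a subjective CMM maturity rating for a CSF subcategory sitting alongside an evidence-based assessment of a mapped 800-53 control. The ratings and evidence below are invented for illustration.

```python
# Illustrative contrast (all values invented): a subjective CSF maturity rating
# versus an empirical assessment of a mapped NIST 800-53 control.

csf_maturity = {
    # CSF subcategory -> CMM level (1-5), typically assigned via interviews and document review
    "PR.AC-1 Identities and credentials are managed": 3,
}

control_assessment = {
    # A NIST 800-53 control mapped to PR.AC-1 -> evidence-based test results
    "AC-2 Account Management": {
        "design_effective": True,       # the control, as designed, addresses the risk
        "operating_effective": False,   # evidence shows quarterly access reviews were missed
        "evidence": ["access review logs Q1-Q2", "joiner/mover/leaver tickets"],
    },
}

# The maturity score alone suggests a reasonable capability; the control test
# shows the control was not actually operating for part of the period.
print(csf_maturity)
print(control_assessment)
```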

Which is the correct approach?

Like most things, it depends on your objective. If you want to demonstrate to those unfamiliar with cyber operations that you have considered all that you should, or if you want to build a capability, the CSF is the way to go. (Note, though, that doing a CSF maturity assessment without assessing the underlying controls limits the amount of trust stakeholders can place in the maturity rating.)

If however, you want to demonstrate that you are actively managing the cyber risk of your organisation, we advise our clients to assess the design and operating effectiveness of their cyber security controls. How do you know if you have the right controls to manage the cyber risks your organisation faces? We will get to that soon. Stay tuned.

Our thoughts on the year ahead

At elevenM, we love shooting the breeze about all things work and play. We recently got together as a team to kick off the new year, share what we’d been up to and the thoughts inspiring us as we kick off 2019. Here’s a summary…

Early in the new year, under a beating sun at the Sydney Cricket Ground, our principal Arjun Ramachandran found himself thinking about cyber risk.

“Indian batsman Cheteshwar Pujara was piling on the runs and I realised – ‘I’m watching a masterclass in managing risk’. He’s not the fanciest or most talented batsman going around, but what Pujara has is total command over his own strengths and weaknesses. He knows when to be aggressive and when to let the ball go. In the face of complex external threats, I was struck by how much confidence comes from knowing your own capabilities and posture.”

A geeky thought to have at the cricket? No doubt. But professional parallels emerge when you least expect them. Particularly after a frantic year in which threats intensified, breaches got bigger, and major new privacy regulations came into force.

Is there privacy in the Home?

Far away from the cricket, our principal Melanie Marks was also having what she describes as a “summer quandary”. Like many people, Melanie this summer had her first extended experience of a virtual assistant (Google Home) over the break.

“These AI assistants are a lot of fun to engage with and offer endless trivia, convenience and integrated home entertainment without having to leave the comfort of the couch,” Melanie says. “However, it’s easy to forget they’re there and it’s hard to understand their collection practices, retention policies and deletion procedures (not to mention how they de-identify data, or the third parties they rely upon).”

Melanie has a challenge for Google in 2019: empower your virtual assistant to answer the question: “Hey Google – how long do you keep my data?” as quickly and clearly as it answers “How do you make an Old Fashioned?”.

Another of our principals and privacy stars Sheila Fitzpatrick has also been pondering the growing tension between new technologies and privacy. Sheila expects emerging technologies like AI and machine learning to keep pushing the boundaries of privacy rights in 2019.

“Many of these technologies have the ‘cool’ factor but do not embrace the fundamental right to privacy,” Sheila says. “They believe the more data they have to work with, the more they can expand the capabilities of their products without considering the negative impact on privacy rights.”

The consumer issue of our time

We expect to see the continued elevation of privacy as a public issue in 2019. Watch for Australia’s consumer watchdog, the Australian Competition and Consumer Commission, to get more involved in privacy, Melanie says. The ACCC foreshadowed this in December via its preliminary report into digital platforms.

Business will also latch onto the idea of privacy as a core consumer issue, says our Head of Product Development Alistair Macleod. Some are already using it as a competitive differentiator, Alistair notes, pointing to manufacturers promoting privacy-enhancing features in new products and Apple’s hard-to-miss pro-privacy billboard at the CES conference just this week.

We’ll also see further international expansion of privacy laws in 2019, Sheila says. Particularly in Asia Pacific and Canada, where some requirements (such as around data localisation) will even exceed provisions under GDPR, widely considered a high watermark for privacy when introduced last May.

Cyber security regulations have their turn

But don’t forget cyber security regulation. Our principal Alan Ligertwood expects the introduction of the Australian Prudential Regulation Authority’s new information security standard CPS 234 in July 2019 to have a significant impact.

CPS 234 applies to financial services companies and their suppliers, and Alan predicts the standard’s shift to a “trust but verify” approach, in which policy and control frameworks are actually tested, could herald a broader shift to a more substantive approach by regulators to overseeing regulatory and policy compliance.

There’s also a federal election in 2019. We’d be naïve not to expect jobs and national security to dominate the campaign, but the policy focus given to critical “new economy” issues like cyber security and privacy in the lead-up to the polls will be worth watching. In recent years cyber security as a portfolio has been shuffled around and dropped like a hot potato at ministerial level.

Will the Government that forms after the election – of whichever colour – show it more love and attention?

New age digital risks

At the very least, let’s hope cyber security agencies and services keep running. Ever dedicated, over the break Alan paid a visit to the National Institute of Standards and Technology’s website – the US standards body that creates the respected Cybersecurity Framework – only to find it unavailable due to the US government shutdown.

“It didn’t quite ruin my holiday, but it did get me thinking about unintended consequences and third party risk. A squabble over border wall funding has resulted in a global cyber security resource being taken offline indefinitely.”

It points to a bigger issue. Third parties and supply chains, and poor governance over them, will again be a major contributor to security and privacy risk this year, reckons Principal Matt Smith.

“The problem is proving too hard for people to manage correctly. Even companies with budgets which extend to managing supplier risk are often not able to get it right – too many suppliers and not enough money or capacity to perform adequate assurance.”

If the growing use of third parties demands that businesses re-think security, our Senior Project Manager Mike Wood sees the same trend in cloud adoption.

“Cloud is the de-facto way of running technology for most businesses.  Many are still transitioning but have traditional security thinking still in place.  A cloud transition must come with a fully thought through security mindset.”

Mike’s expecting to see even stronger uptake of controls like Cloud Access Security Brokers in 2019.

But is this the silver bullet?

We wonder if growing interest in cyber risk insurance in 2019 could be the catalyst for uplifted controls and governance across the economy. After all, organisations will need to have the right controls and processes in place in order to qualify for insurance in line with underwriting requirements.

But questions linger over the maturity of these underwriting methodologies, Alan notes.

“Organisations themselves find it extremely difficult to quantify and adequately mitigate cyber threats, yet insurance companies sell policies to hedge against such an incident.”

The likely lesson here is for organisations not to treat cyber insurance as a silver bullet. Instead, do the hard yards and prioritise a risk-based approach built on strong executive sponsorship, effective governance, and actively engaging your people in the journey.

It’s all about trust

If there was a common theme in our team’s readings and reflections after the break, it was probably over the intricacies of trust in the digital age.

When the waves stopped breaking on Manly beach, Principal Peter Quigley spent time following the work of Renee DiResta, who has published insightful research into the use of disinformation and malign narratives in social media. There’s growing awareness of how digital platforms are being used to sow distrust in society. In a similar vein, Arjun has been studying the work of Peter Singer, whose research into how social media is being weaponised could have insights for organisations wanting to use social media to enhance trust, particularly in the wake of a breach.

Alistair notes how some technology companies have begun to prioritise digital wellbeing. For example, new features in Android and iOS that help users manage their screen time – and thus minimise harm – reflect the potential for a more trusting, collaborative digital ecosystem.

At the end of the day, much of our work as a team goes towards helping organisations mitigate digital risk in order to increase digital trust – among customers, staff and partners. The challenges are aplenty but exciting, and we look forward to working on them with many of you in 2019.

End of year wrap

The year started with a meltdown. Literally.

New Year’s Eve hangovers had barely cleared when security researchers announced they had discovered security flaws that would impact “virtually every user of a personal computer”. “Happy new year” to you too. Dubbed “Meltdown” and “Spectre”, the flaws in popular computer processors would allow hackers to access sensitive information from memory – certainly no small thing. Chipmakers urgently released updates. Users were urged to patch. Fortunately, the sky didn’t fall in.

If all this was meant to jolt us into taking notice of data security and privacy in 2018 … well, that seemed unnecessary. With formidable new data protection regulations coming into force, many organisations were already stepping into this year with a much sharper focus on digital risk.

The first of these new regulatory regimes took effect in February, when Australia finally introduced mandatory data breach reporting. Under the Notifiable Data Breaches (NDB) scheme, overseen by the Office of the Australian Information Commissioner, applicable organisations must now disclose any breaches of personal information likely to result in serious harm.

In May, the world also welcomed the EU’s General Data Protection Regulation (GDPR). Kind of hard to miss, with an onslaught of updated privacy policies flooding user inboxes from companies keen to show compliance.

The promise of GDPR is to increase consumers’ consent and control over their data and place a greater emphasis on transparency.  Its extra-territorial nature (GDPR applies to any organisation servicing customers based in Europe) meant companies all around the world worked fast to comply, updating privacy policies, implementing privacy by design and creating data breach response plans. A nice reward for these proactive companies was evidence that GDPR is emerging as a template for new privacy regulations around the world. GDPR-compliance gets you ahead of the game.

With these regimes in place, anticipation built around who would be first to test them out. For the local NDB scheme, the honour fell to PageUp. In May, the Australian HR service company detected an unknown attacker had gained access to job applicants’ personal details and usernames and passwords of PageUp employees.

It wasn’t the first breach reported under NDB but was arguably the first big one – not least because of who else it dragged into the fray. It was a veritable who’s who of big Aussie brands – Commonwealth Bank, Australia Post, Coles, Telstra and Jetstar, to name a few. For these PageUp clients, their own data had been caught up in a breach of a service provider, shining a bright light on what could be the security lesson of 2018: manage your supplier risks.

By July we were all bouncing off the walls. Commencement of the My Health Record (MHR) three month opt-out period heralded an almighty nationwide brouhaha. The scheme’s privacy provisions came under heavy fire, most particularly the fact the scheme was opt-out by default, loose provisions around law enforcement access to health records, and a lack of faith in how well-versed those accessing the records were in good privacy and security practices. Things unravelled so much that the Prime Minister had to step in, momentarily taking a break from more important national duties such as fighting those coming for his job.

Amendments to the MHR legislation were eventually passed (addressing some, but not all of these issues), but not before public trust in the project was severely tarnished. MHR stands as a stark lesson for any organisation delivering major projects and transformations – proactively managing the privacy and security risks is critical to success.

If not enough attention was given to data concerns in the design of MHR, security considerations thoroughly dominated the conversation about another national-level digital project – the build out of Australia’s 5G networks. After months of speculation, the Australian government in August banned Chinese telecommunications company Huawei from taking part in the 5G rollout, citing national security concerns. Despite multiple assurances from the company about its independence from the Chinese government and offers of greater oversight, Australia still said ‘no way’ to Huawei.

China responded frostily. Some now fear we’re in the early stages of a tech cold war in which retaliatory bans and invasive security provisions will be levelled at western businesses by China (where local cyber security laws should already be a concern for businesses with operations in China).

Putting aside the geopolitical ramifications, the sobering reminder for any business from the Huawei ban is the heightened concern about supply chain risks. With supply chain attacks on the rise, managing vendor and third-party security risks requires the same energy as attending to risks in your own infrastructure.

Ask Facebook. A lax attitude towards its third-party partners brought the social media giant intense pain in 2018. The Cambridge Analytica scandal proved to be one of the most egregious misuses of data and abuses of user trust in recent memory, with the data of almost 90 million Facebook users harvested by a data mining company to influence elections. The global public reacted furiously. Many users would delete their Facebook accounts in anger. Schadenfreude enthusiasts had much to feast on as Facebook founder and CEO Mark Zuckerberg testified uncomfortably before the US Senate.

The social network would find itself under the pump on various privacy and security issues throughout 2018, including the millions of fake accounts on its platform, the high profile departure of security chief Alex Stamos and news of further data breaches.

But when it came to brands battling breaches, Facebook hardly went it alone in 2018. In the first full reporting quarter after the commencement of the NDB scheme, the OAIC received 242 data breach notifications, followed by 245 notifications for the subsequent quarter.

The scale of global data breaches has been eye-watering. Breaches involving Marriott International, Exactis, Aadhaar and Quora all eclipsed 100 million affected customers.

With breaches on the rise, it becomes ever more important that businesses be well prepared to respond. The maxim that organisations will increasingly be judged not on the fact they had a breach, but on how they respond, grew strong legs this year.

But we needn’t succumb to defeatism. Passionate security and privacy communities continue to try to reduce the likelihood or impact of breaches and other cyber incidents. Technologies and solutions useful in mitigating common threats gained traction. For instance, multi-factor authentication had more moments in the sun this year, not least because we became more attuned to the flimsiness of relying on passwords alone (thanks Ye!). Security solutions supporting other key digital trends also continue to gain favour – tools like Cloud Access Security Brokers enjoyed strong momentum this year as businesses look to manage the risks of moving towards cloud.

Even finger-pointing was deployed in the fight against hackers. This year, the Australian government and its allies began to publicly attribute a number of major cyber campaigns to state-sponsored actors. A gentle step towards deterrence, the attributions signalled a more overt and more public pro-security posture from the Government. Regrettably, some of this good work may have been undone late in the year with the passage of an “encryption bill”, seen by many as weakening the security of the overall digital ecosystem and damaging to local technology companies.

In many ways, in 2018 we were given the chance to step into a more mature conversation about digital risk and the challenges of data protection, privacy and cyber security. Sensationalist FUD in earlier years about cyber-attacks or crippling GDPR compliance largely gave way to a more pragmatic acceptance of the likelihood of breaches, high public expectations and the need to be well prepared to respond and protect customers.

At a strategic level, a more mature and business-aligned approach is also evident. Both the Australian and US governments introduced initiatives that emphasise the value of a risk-based approach to cyber security, which is also taking hold in the private sector. The discipline of cyber risk management is helping security executives better understand their security posture and have more engaging conversations with their boards.

All this progress, and we still have the grand promise that AI and blockchain will one day solve all our problems.  Maybe in 2019 ….

Till then, we wish you a happy festive season and a great new year.

From the team at elevenM.

APRA gets $60m in new funding: CPS 234 just got very real

We have previously talked about APRA’s new information security regulation and how global fines will influence the enforcement of this new regulation.

Today we saw a clear statement of intent from the government in the form of $58.7 million of new funding for APRA to focus on the identification of new and emerging risks such as cyber and fintech.

As previously stated, if you are in line of sight for CPS 234 either as a regulated entity or a supplier to one, we advise you to have a clear plan in place on how you will meet your obligations. No one wants to be the Tesco of Australia.

If you would like to talk to someone from elevenM about getting ready for CPS 234, please drop us a note at hello@elevenM.com.au or call us on 1300 003 922.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

The journey toward trust – Part 3: Trust through reputational management

This is the third and final article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part 1 we took a deeper look at the meaning and underlying principles of trust. Part two explored best practice approaches to using regulatory compliance to build trust.

In this piece, we look at the role of reputation management in building trust on privacy and security issues. 

Reputation management

The way an organisation manages its reputation is unsurprisingly tightly bound up with trust.

While there are many aspects to reputation management, an effective public response is one of the most critical requirements, if not the most critical.

In the era of fast-paced digital media, a poorly managed communications response to a cyber or privacy incident can rapidly damage trust. With a vocal and influential community of highly informed security and privacy experts active on social media, corporate responses that don’t meet the mark get pulled apart very quickly.

Accordingly, a bad response produces significant damage, including serious financial impacts, executive scalps, and broader repercussions like government and regulatory inquiries and class actions.

A Google search will quickly uncover examples of organisations that mishandled their public response. Just in recent weeks we learned Uber will pay US $148m in fines over a 2016 breach, largely because of failures in how it went about disclosing the breach.

Typically, examples of poor public responses to breaches include one or more of the following characteristics:

  • The organisation was slow to reveal the incident to customers (ie. not prioritising truth, safety and reliability)
  • The organisation was legalistic or defensive (ie. not prioritising the protection of customers)
  • The organisation pointed the finger at others (ie. not prioritising reliability or accountability)
  • The organisation provided incorrect or inadequate technical details (ie. not prioritising a show of competence)

As the analyses in brackets show, the reason public responses often unravel as they do is that they feature statements that violate the key principles of trust we outlined in part one of this series.

Achieving a high-quality, trust-building response that reflects and positively communicates principles of trust is not necessarily easy, especially in the intensity of managing an incident.

An organisation’s best chance of getting things right is to build communications plans in advance that embed the right messages and behaviours.

Plans and messages will always need to be adapted to suit specific incidents, of course, but this proactive approach allows organisations to develop a foundation of clear, trust-building messages in a calmer context.

It’s equally critical to run exercises and simulations around these plans, to ensure the key staff are aware of their roles and are aligned to the objectives of a good public crisis response and that hiccups are addressed before a real crisis occurs.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

The journey toward trust – Part 2: Trust through regulatory compliance

This is the second article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part 1 we took a deeper look at what trust means, and uncovered some guiding principles organisations can work towards when seeking to build trust.

In this piece, we look at best practice approaches to using regulatory compliance to build trust.

Privacy laws and regulatory guidance provide a pretty good framework for doing the right thing when it comes to trusted privacy practices (otherwise known as the proper collection, use and disclosure of personal information).

We are the first to advocate for a compliance-based framework.  Every entity bound by the Privacy Act 1988 and equivalent laws should be taking proactive steps to establish and maintain internal practices, procedures and systems that ensure compliance with the Australian Privacy Principles.  They should be able to demonstrate appropriate accountabilities, governance and resourcing.

But compliance alone won’t build trust.

For one, the majority of Australian businesses are not bound by the Privacy Act because they fall under its $3m threshold. This is one of several reasons why Australian regulation is considered inadequate by EU data protection standards.

Secondly, there is variability in the ways that entities operationalise privacy. The regulator has published guidance and tooling for the public sector to help create some common benchmarks and uplift maturity, recognising that some entities are applying the bare minimum. No such guidance exists for the private sector – yet.

Consumer expectations are also higher than the law. It may once have been acceptable for businesses to use and share data to suit their own purposes whilst burying their notices in screeds of legalese. However, the furore over Facebook / Cambridge Analytica shows that sentiment has changed (and also raises a whole bucket of governance issues). Similarly, global consumers increasingly expect to be protected by the high standards set by the GDPR and other stringent frameworks wherever they are, including rights such as the right to be forgotten and the right to data portability.

Lastly, current compliance frameworks do not help organisations to determine what is ethical when it comes to using and repurposing personal information. In short, an organisation can comply with the Privacy Act and still fall into an ethical hole with its data uses.

Your organisation should be thinking about its approach to building and protecting trust through privacy frameworks.  Start with compliance, then seek to bolster weak spots with an ethical framework; a statement of boundaries to which your organisation should adhere. 


In the third and final part of this series, we detail how an organisation’s approach to reputation management for privacy and cyber security issues can build or damage trust.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

The journey toward trust – Part 1: Understanding trust

Join us for a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. We also focus on the role of regulatory compliance and reputation management in building trust, and outline best practice approaches.

Be it users stepping away from the world’s biggest social media platform after repeated privacy scandals, a major airline’s share price plummeting after a large data breach, or Australia’s largest bank committing to a stronger focus on privacy and security as it rebuilds its image – events in recent weeks provide a strong reminder of the fragility and critical importance of trust for businesses seeking success in the digital economy.

Bodies as illustrious as the World Economic Forum and OECD have written at length about the pivotal role of trust as a driving factor for success today.

But what does trust actually mean in the context of your organisation? And how do you practically go about building it?

At elevenM, we spend considerable time discussing and researching these questions from the perspectives of our skills and experiences across privacy, cyber security, risk, strategy and communications.

A good starting point for any organisation wanting to make trust a competitive differentiator is to gain a deeper understanding of what trust actually means, and specifically, what it means for it.

Trust is a layered concept, and different things are required in different contexts to build trust.

Some basic tenets of trust become obvious when we look to popular dictionaries. Ideas like safety, reliability, truth, competence and consistency stand out as fundamental principles.

Another way to learn what trust means in a practical sense is to look at why brands are trusted. For instance, the most recent Roy Morgan survey listed supermarket ALDI as the most trusted brand in Australia. Roy Morgan explains this is built on ALDI’s reputation for reliability and meeting customer needs.

Importantly, the dictionary definitions also emphasise an ethical aspect – trust is built by doing good and protecting customers from harm.

Digging a little deeper, we look to the work of trust expert and business lecturer Rachel Botsman, who describes trust as “a confident relationship with the unknown”.  This moves us into the digital space in which organisations operate today, and towards a more nuanced understanding.

We can infer that consumers want new digital experiences, and an important part of building trust is for organisations to innovate and help customers step into the novel and unknown, but with safety and confidence.

So, how do we implement these ideas about trust in a practical sense?

With these definitions in mind, organisations should ask themselves some practical and instructive questions that illuminate whether they are building trust.

  • Do customers feel their data is safe with you?
  • Can customers see that you seek to protect them from harm?
  • Are you accurate and transparent in your representations?
  • Do your behaviours, statements, products and services convey a sense of competence and consistency?
  • Do you meet expectations of your customers (and not just clear the bar set by regulators)?
  • Are you innovative and helping customers towards new experiences?

In part two of this series, we will explore how regulatory compliance can be used to build trust.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

What does the record FCA cyber fine mean for Australia?

First, a bit of context: the Financial Conduct Authority (FCA) is the conduct and prudential regulator for financial services in the UK. It is, in part, an equivalent to the Australian Prudential Regulation Authority (APRA).

Record cyber related fine

This week the FCA handed down a record cyber related fine to the banking arm of the UK’s largest supermarket chain Tesco for failing to protect account holders from a “foreseeable” cyber attack two years ago. The fine totalled £23.4 million but due to an agreed early stage discount, the fine was reduced by 30% to £16.4 million.

Cyber attack?

It could be argued that this was not a cyber attack, in that it was not a breach of Tesco Bank’s network or software but rather a new twist on good old card fraud. But for clarity, the FCA defined the attack which led to this fine as: “a mass algorithmic fraud attack which affected Tesco Bank’s personal current account and debit card customers from 5 to 8 November 2016.”

What cyber rules did Tesco break?

Interestingly, the FCA does not have any cyber-specific regulation. The FCA exercised powers through provisions published in its Handbook. The Handbook has Principles, which are general statements of fundamental obligations. Tesco’s fine was therefore issued against the comfortably generic Principle 2: “A firm must conduct its business with due skill, care and diligence”.

What does this mean for Australian financial services?

APRA, you may recall from our previous blog, has issued a draft information security regulation, CPS 234. This new regulation sets out clear rules on how regulated Australian institutions should manage their cyber risk.

If we use the Tesco Bank incident as an example, here is how APRA could use CPS 234:

Information security capability: “An APRA-regulated entity must actively maintain its information security capability with respect to changes in vulnerabilities and threats, including those resulting from changes to information assets or its business environment”. – Visa provided Tesco Bank with threat intelligence, as Visa had noted this threat occurring in Brazil and the US. Whilst Tesco Bank actioned this intelligence for its credit cards, it failed to do so for its debit cards, which netted the threat actors £2.26 million.

Incident management: “An APRA-regulated entity must have robust mechanisms in place to detect and respond to information security incidents in a timely manner. An APRA-regulated entity must maintain plans to respond to information security incidents that the entity considers could plausibly occur (information security response plans)”.  – The following incident management failings were noted by the FCA:

  • Tesco Bank’s Financial Crime Operations team failed to follow written procedures;
  • the Fraud Strategy Team drafted a rule to block the fraudulent transactions, but coded the rule incorrectly;
  • the Fraud Strategy Team failed to monitor the rule’s operation and did not discover for several hours that the rule was not working;
  • the responsible managers should have invoked crisis management procedures earlier.

Do we think APRA will be handing out fines this size?

Short answer, yes. Post the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry, there is very little love for the financial services industry in Australia. Our sense is that politicians who want to remain politicians will need to be seen to be tough on financial services and therefore enforcement authorities like APRA will most likely see an increase in their budgets.

Unfortunately for those of you in cyber and risk teams in financial services, it is a bit of a perfect storm. The regulator has a new set of rules to enforce, the money to conduct investigations and a precedent from within the Commonwealth.

What about the suppliers?

Something that not many are talking about but really should be, is the supplier landscape. Like it or not, the banks in Australia are some of the biggest businesses in the country. They use a lot of suppliers to deliver critical services including cyber security. Under the proposed APRA standard:

Implementation of controls: “Where information assets are managed by a related party or third party, an APRA-regulated entity must evaluate the design and operating effectiveness of that party’s information security controls”.

Banks are now clearly accountable for the effectiveness of the information security controls operated by their suppliers as they relate to a bank’s defences. If you are a supplier (major or otherwise) to the banks, given this new level of oversight from their regulator, we advise you to get your house in order because it is likely that your door will be knocked upon soon.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.