Mr Dutton, we need help with supplier risk

When we speak with heads of cyber, risk and privacy, eventually there comes a point when brows become more furrowed and the conversation turns to suppliers and the risk they pose.

There are a couple of likely triggers. First, APRA’s new CPS 234 regulation requires regulated entities to evaluate a supplier’s information security controls. Second, there is now heightened awareness in the business community that many data breaches suffered by organisations are ultimately the result of a breach at a supplier.

The problem space

Organisations today use hundreds or even thousands of suppliers for a multitude of services. The data shared and access given to deliver those services is increasingly so extensive that it has blurred the boundaries between organisation and supplier. In many cases, the supplier’s risk is the organisation’s risk.

Gaining assurance over the risk posed by a large number of suppliers, without using up every dollar of budget allocated to the cyber team, is an increasingly difficult challenge.

Assurance

To appreciate the scope of the challenge, we first need to understand the concept of “assurance”, a term not always well understood outside the worlds of risk and assurance. So let’s take a moment to clarify, using DLP (Data Loss Prevention) as an example.

To gain assurance over a control, you must evaluate both the design and operating effectiveness of that control. APRA’s new information security regulation CPS 234 states that regulated entities require both when assessing the information security controls they rely upon to manage their risk, even if that control sits with a supplier. So what would that entail in our DLP example?

  • Design effectiveness would be confirming that the DLP tool covers all information sources and potential exit points for your data. It would involve making sure data is marked and can therefore be monitored by the tool. Evidence of the control working would be kept.
  • Operating effectiveness would be the proof (using the evidence above) that the control has been operating as intended for the period of time it was supposed to.

The unfortunate reality of assurance

In previous roles, members of our team have been part of designing and running market-leading supplier risk services. But these services never actually gave any assurance, unlike audit reports (eg. SOC 2, ASAE). Supplier risk reports typically include a familiar caveat: “this report is not an audit and does not constitute assurance”.

This is because the supplier risk service delivered involves the consulting firm sending the supplier a spreadsheet, which the supplier fills in, after which the consulting firm asks for evidence to support the responses.

This process provides little insight as to the design or operating effectiveness of a control. If the worst case happens and a supplier is breached, the organisation will point to the consulting firm, and the consulting firm will point to that statement in the report that said the service they were providing did not constitute assurance.

We need your help, Mr Dutton

The reality is that every organisation getting actual assurance over every control at each of its suppliers is just not a feasible option.

We believe Australia needs a national scheme to manage supplier risk. A scheme in which baseline security controls are properly audited for their design and operating effectiveness, where assurance is gained and results are shared as needed. This would allow organisations to focus their cyber budget and energies on gaining assurance over the specific controls at suppliers that are unique to their service arrangement.

Last week, Home Affairs Minister Peter Dutton issued a discussion paper seeking input into the nation’s 2020 cyber security strategy. This is a great opportunity for industry to put forward the importance of a national and shared approach to managing supplier risk in this country. We will be putting forward this view, and some of the ideas in this post, in our response.

We encourage those of you struggling with supplier risk to do the same. If you would like to contribute to our response, please drop us a line here.

Up close and personal with the Singaporean Cybersecurity Act

As part of a recent engagement, we carried out an in-depth review of the new Singaporean Cybersecurity Act.

What do we think?

The Act is a bold approach to ensuring the security of a nation’s most critical infrastructure, which we think will be copied by other countries and may even be a model for large enterprises.

Why bold?

A fundamental challenge is that the level of cybersecurity protecting any piece of infrastructure at any given time is usually heavily dependent on a Chief Information Security Officer’s (CISO) ability to present cyber risk to those controlling the purse strings. The result is varied levels of control and capability across some very important infrastructure.

So what is the answer? Like most things, it depends on who you ask. Singapore has taken the bold approach of regulating the cybersecurity of the technology infrastructure the country needs to run smoothly.

Our key takeaways

  • The Act introduces a Cyber Commissioner who will “respond to cybersecurity incidents that threaten the national security, defence, economy, foreign relations, public health, public order or public safety, or any essential services, of Singapore, whether such cybersecurity incidents occur in or outside Singapore” – It will be interesting to see how this works in practice. Many global companies within the scope of this framework will be hesitant to provide that level of access to a foreign state.
  • The Act creates the category of Critical Information Infrastructure (CII) in Singapore, meaning “the computer or computer system which is necessary for the continuous delivery of an essential service, and the loss or compromise of the computer or computer system will have a debilitating effect on the availability of the essential service in Singapore” – These CIIs span most industries across both the public and private sectors. It will be very interesting to see what is determined to be a CII and how private companies deal with this. Even from an investment perspective, who pays to uplift the security posture or to rewrite the supporting business processes?
  • Each designated CII will have an owner who will be assigned statutory duties specific to the cybersecurity of the CII. – Yeah, these owners will be held to account by the Commissioner. Failure to fulfil their role can result in personal fines of up to $100,000 or imprisonment for a term not exceeding two years. Given most companies already struggle to define the ‘owner’ of a system, will this push the ownership of these business/operational systems to CISOs?
  • The Act introduces a licensing framework for suppliers where “No person is to provide licensable cybersecurity service without licence”. – A very interesting one. Suppliers of cybersecurity services to the CIIs will need a licence issued by the Commissioner. A sign of things to come in the supplier risk space, perhaps?

The Act can be found here: Singapore Cybersecurity Act 2018


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.

The journey toward trust – Part 3: Trust through reputational management

This is the third and final article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part one we took a deeper look at the meaning and underlying principles of trust. Part two explored best practice approaches to using regulatory compliance to build trust.

In this piece, we look at the role of reputation management in building trust on privacy and security issues. 

Reputation management

The way an organisation manages its reputation is unsurprisingly tightly bound up with trust.

While there are many aspects to reputation management, an effective public response is one of the most critical requirements, if not the most critical.

In the era of fast-paced digital media, a poorly managed communications response to a cyber or privacy incident can rapidly damage trust. With a vocal and influential community of highly informed security and privacy experts active on social media, corporate responses that don’t meet the mark get pulled apart very quickly.

Accordingly, a poor response can produce severe outcomes, including serious financial impacts, executive scalps, and broader repercussions like government and regulatory inquiries and class actions.

A Google search will quickly uncover examples of organisations that mishandled their public response. Just in recent weeks we learned Uber will pay US$148m in fines over a 2016 breach, largely because of failures in how it went about disclosing the breach.

Typically, examples of poor public responses to breaches include one or more of the following characteristics:

  • The organisation was slow to reveal the incident to customers (ie. not prioritising truth, safety and reliability)
  • The organisation was legalistic or defensive (ie. not prioritising the protection of customers)
  • The organisation pointed the finger at others (ie. not prioritising reliability or accountability)
  • The organisation provided incorrect or inadequate technical details (ie. not prioritising a show of competence)

As the analyses in the brackets show, the reason public responses often unravel as they do is that they feature statements that violate the key principles of trust we outlined in part one of this series.

Achieving a high-quality, trust-building response that reflects and positively communicates principles of trust is not necessarily easy, especially in the intensity of managing an incident.

An organisation’s best chance of getting things right is to build communications plans in advance that embed the right messages and behaviours.

Plans and messages will always need to be adapted to suit specific incidents, of course, but this proactive approach allows organisations to develop a foundation of clear, trust-building messages in a calmer context.

It’s equally critical to run exercises and simulations around these plans, to ensure key staff are aware of their roles, are aligned to the objectives of a good public crisis response, and that hiccups are addressed before a real crisis occurs.



The journey toward trust – Part 2: Trust through regulatory compliance

This is the second article in a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. In part one we took a deeper look at what trust means, and uncovered some guiding principles organisations can work towards when seeking to build trust.

In this piece, we look at best practice approaches to using regulatory compliance to build trust.

Privacy laws and regulatory guidance provide a pretty good framework for doing the right thing when it comes to trusted privacy practices (otherwise known as the proper collection, use and disclosure of personal information).

We are the first to advocate for a compliance-based framework. Every entity bound by the Privacy Act 1988 and equivalent laws should be taking proactive steps to establish and maintain internal practices, procedures and systems that ensure compliance with the Australian Privacy Principles. They should be able to demonstrate appropriate accountabilities, governance and resourcing.

But compliance alone won’t build trust.

For one, the majority of Australian businesses are not bound by the Privacy Act because they fall under its $3m annual turnover threshold. This is one of several reasons why Australian regulation is considered inadequate by EU data protection standards.

Secondly, there is variability in the ways that entities operationalise privacy. The regulator has published guidance and tooling for the public sector to help create common benchmarks and uplift maturity, recognising that some entities are applying the bare minimum. No such guidance exists for the private sector – yet.

Consumer expectations are also higher than the law. It may once have been acceptable for businesses to use and share data to suit their own purposes whilst burying their notices in screeds of legalese. However, the furore over Facebook / Cambridge Analytica shows that sentiment has changed (and also raises a whole bucket of governance issues). Similarly, global consumers increasingly expect to be protected by the high standards set by the GDPR and other stringent frameworks wherever they are, including rights such as the right to be forgotten and the right to data portability.

Lastly, current compliance frameworks do not help organisations to determine what is ethical when it comes to using and repurposing personal information. In short, an organisation can comply with the Privacy Act and still fall into an ethical hole with its data uses.

Your organisation should be thinking about its approach to building and protecting trust through privacy frameworks. Start with compliance, then seek to bolster weak spots with an ethical framework: a statement of boundaries to which your organisation should adhere.


In the third and final part of this series, we detail how an organisation’s approach to reputation management for privacy and cyber security issues can build or damage trust.



The journey toward trust – Part 1: Understanding trust

Join us for a three-part series that explores the notion of trust in today’s digital economy, and how organisations can practically build trust. We also focus on the role of regulatory compliance and reputation management in building trust, and outline best practice approaches.

Be it users stepping away from the world’s biggest social media platform after repeated privacy scandals, a major airline’s share price plummeting after a large data breach, or Australia’s largest bank committing to a stronger focus on privacy and security in rebuilding its image – events in recent weeks provide a strong reminder of the fragility and critical importance of trust for businesses seeking success in the digital economy.

Bodies as illustrious as the World Economic Forum and OECD have written at length about the pivotal role of trust as a driving factor for success today.

But what does trust actually mean in the context of your organisation? And how do you practically go about building it?

At elevenM, we spend considerable time discussing and researching these questions from the perspectives of our skills and experiences across privacy, cyber security, risk, strategy and communications.

A good starting point for any organisation wanting to make trust a competitive differentiator is to gain a deeper understanding of what trust actually means and, specifically, what it means for that organisation.

Trust is a layered concept, and different things are required in different contexts to build trust.

Some basic tenets of trust become obvious when we look to popular dictionaries. Ideas like safety, reliability, truth, competence and consistency stand out as fundamental principles.

Another way to learn what trust means in a practical sense is to look at why brands are trusted. For instance, the most recent Roy Morgan survey listed supermarket ALDI as the most trusted brand in Australia. Roy Morgan explains this is built on ALDI’s reputation for reliability and meeting customer needs.

Importantly, the dictionary definitions also emphasise an ethical aspect – trust is built by doing good and protecting customers from harm.

Digging a little deeper, we look to the work of trust expert and business lecturer Rachel Botsman, who describes trust as “a confident relationship with the unknown”.  This moves us into the digital space in which organisations operate today, and towards a more nuanced understanding.

We can infer that consumers want new digital experiences, and an important part of building trust is for organisations to innovate and help customers step into the novel and unknown, but with safety and confidence.

So, how do we implement these ideas about trust in a practical sense?

With these definitions in mind, organisations should ask themselves some practical and instructive questions that illuminate whether they are building trust.

  • Do customers feel their data is safe with you?
  • Can customers see that you seek to protect them from harm?
  • Are you accurate and transparent in your representations?
  • Do your behaviours, statements, products and services convey a sense of competence and consistency?
  • Do you meet expectations of your customers (and not just clear the bar set by regulators)?
  • Are you innovative and helping customers towards new experiences?

In part two of this series, we will explore how regulatory compliance can be used to build trust.



What does the record FCA cyber fine mean for Australia?

First, a bit of context: the Financial Conduct Authority (FCA) is the conduct and prudential regulator for financial services in the UK. It is in part an equivalent to the Australian Prudential Regulation Authority (APRA).

Record cyber related fine

This week the FCA handed down a record cyber-related fine to the banking arm of the UK’s largest supermarket chain, Tesco, for failing to protect account holders from a “foreseeable” cyber attack two years ago. The fine totalled £23.4 million but, due to an agreed early-stage discount, was reduced by 30% to £16.4 million.
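For those who like to check the maths, the discount arithmetic reported above works out as follows (a trivial sketch using the figures quoted; the variable names are ours, for illustration only):

```python
# Sanity-check of the early-settlement discount arithmetic reported above.
full_fine_m = 23.4                       # original fine, in GBP millions
discount = 0.30                          # agreed 30% early-stage discount
reduced_fine_m = full_fine_m * (1 - discount)
print(f"£{reduced_fine_m:.1f} million")  # £16.4 million
```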

Cyber attack?

It could be argued that this was not a cyber attack, in that it was not a breach of Tesco Bank’s network or software but rather a new twist on good old card fraud. But for clarity, the FCA defined the attack which led to this fine as: “a mass algorithmic fraud attack which affected Tesco Bank’s personal current account and debit card customers from 5 to 8 November 2016.”

What cyber rules did Tesco break?

Interestingly, the FCA does not have any cyber-specific regulation. The FCA exercised its powers through provisions published in its Handbook. The Handbook has Principles, which are general statements of fundamental obligations. Tesco’s fine was therefore issued against the comfortably generic Principle 2: “A firm must conduct its business with due skill, care and diligence”.

What does this mean for Australian financial services?

APRA, you may recall from our previous blog, has issued a draft information security regulation, CPS 234. This new regulation sets out clear rules on how regulated Australian institutions should be managing their cyber risk.

If we use the Tesco Bank incident as an example, here is how APRA could use CPS 234:

Information security capability: “An APRA-regulated entity must actively maintain its information security capability with respect to changes in vulnerabilities and threats, including those resulting from changes to information assets or its business environment”. – Visa provided Tesco Bank with threat intelligence, as Visa had noted this threat occurring in Brazil and the US. Whilst Tesco Bank actioned this intelligence against its credit cards, it failed to do so for its debit cards, which netted the threat actors £2.26 million.

Incident management: “An APRA-regulated entity must have robust mechanisms in place to detect and respond to information security incidents in a timely manner. An APRA-regulated entity must maintain plans to respond to information security incidents that the entity considers could plausibly occur (information security response plans)”.  – The following incident management failings were noted by the FCA:

  • Tesco Bank’s Financial Crime Operations team failed to follow written procedures.
  • The Fraud Strategy Team drafted a rule to block the fraudulent transactions, but coded the rule incorrectly.
  • The Fraud Strategy Team failed to monitor the rule’s operation and did not discover until several hours later that the rule was not working.
  • The responsible managers should have invoked crisis management procedures earlier.

Do we think APRA will be handing out fines this size?

Short answer: yes. Post the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry, there is very little love for the financial services industry in Australia. Our sense is that politicians who want to remain politicians will need to be seen to be tough on financial services, and enforcement authorities like APRA will therefore most likely see an increase in their budgets.

Unfortunately for those of you in cyber and risk teams in financial services, it is a bit of a perfect storm. The regulator has a new set of rules to enforce, the money to conduct investigations and a precedent from within the Commonwealth.

What about the suppliers?

Something that not many are talking about but really should be, is the supplier landscape. Like it or not, the banks in Australia are some of the biggest businesses in the country. They use a lot of suppliers to deliver critical services including cyber security. Under the proposed APRA standard:

Implementation of controls: “Where information assets are managed by a related party or third party, an APRA-regulated entity must evaluate the design and operating effectiveness of that party’s information security controls”.

Banks are now clearly accountable for the effectiveness of the information security controls operated by their suppliers as they relate to a bank’s defences. If you are a supplier (major or otherwise) to the banks, given this new level of oversight from their regulator, we advise you to get your house in order because it is likely that your door will be knocked upon soon.



Nine steps to a successful privacy and cyber security capability uplift

Most organisations today understand the critical importance of cyber security and privacy protection to their business. Many are commencing major uplift programs, or at least considering how they should get started.

These projects inevitably carry high expectations because of what’s at stake. They’re also inherently complex and impact many parts of the organisation. Converting the effort and funding that goes into these projects into success and sustained improvement to business-as-usual practices is rarely straightforward.

Drawing on our collective experiences working on significant cyber security and privacy uplift programs across the globe, in a variety of industries, here’s what we believe are key elements to success.

1. Secure a clear executive mandate

Your uplift program is dealing with critical risks to your organisation. The changes you will seek to drive through these programs will require cooperation across many parts of your organisation, and potentially partners and third parties too. A mandate and sponsorship from your executive is critical.

Think strategically about who else you need on-side, beyond your board and executive committee. Build an influence map and identify potential enablers and detractors, and engage early. Empower your program leadership team and business leadership from affected areas to make timely decisions and deliver their mandate.

2. Adopt a customer and human-centric approach

Uplift programs need to focus on people change as well as changes to processes and technology. Success in this space very often comes down to changing behaviours and ensuring the organisation has sufficient capacity to manage the new technology and process outputs (eg how to deal with incidents).

We therefore suggest that you adopt a customer and human-centric approach. Give serious time, attention and resourcing to areas including communications planning, organisational change management, stakeholder engagement, training and awareness.

3. Know the business value of what you are going to deliver and articulate it

An opaque or misaligned understanding of what a security or privacy program is meant to deliver is often the source of its undoing. It is crucial to ensure scope is clear and aligned to the executive mandate.

Define the value and benefits of your uplift program early, communicate them appropriately and find a way to demonstrate this value over time. Be sure to speak in terms the business understands, not just the new technologies or capabilities you will roll out. For instance, what risks have you mitigated?

You can’t afford to be shy. Ramp up the PR to build recognition about your program and its value among staff, executive and board members. Think about branding.

4. Prioritise the foundational elements

If you’re in an organisation where security and privacy risks have been neglected, but now have a mandate for broad change, you can fall into the trap of trying to do too much at once.

Think of this as being your opportunity to get the groundwork in place for your future vision. Regardless of whether the foundational elements are technology or process related, most with tenure in your organisation know which of them need work. From our experience, those same people will also understand the importance of getting them right and in most cases would be willing to help you fix them.

As a friendly warning, don’t be lured down the path of purchasing expensive solutions without having the right groundwork in place. Most, if not all of these solutions rely on such foundations.

5. Deliver your uplift as a program

For the best results, deliver your uplift as a dedicated change program rather than through BAU.

Your program will of course need to work closely with BAU teams to ensure the sustained success of the program. Have clear and agreed criteria with those teams on the transition to BAU. Monitor BAU teams’ preparation and readiness as part of your program.

6. Introduce an efficient governance and decision making process

Robust and disciplined governance is critical. Involve key stakeholders, implement clear KPIs and methods of measurement, and create an efficient and responsive decision-making process to drive your program.

Governance can be light touch provided the right people are involved and the executive supports them. Limit the involvement of “passengers” on steering groups who aren’t able to contribute, and make sure representatives from BAU are included.

7. Have a ruthless focus on your strategic priorities

These programs operate in the context of a fast-moving threat and regulatory landscape. Things change rapidly and there will be unforeseen challenges.

It’s important to be brave and assured in holding to your strategic priorities. Avoid temptation to succumb to tactical “quick fixes” that solve short-term problems but bring long-term pain.

8. Build a high-performance culture and mindset for those delivering the program

These programs are hard but can be immensely satisfying and career-defining for those involved. Investing in the positivity, pride and engagement of your delivery team will pay immense dividends.

Seek to foster a high-performance culture, enthusiasm, tolerance and collaboration. Create an environment that is accepting of creativity and experimentation.

9. Be cognisant of the skills shortage and plan accordingly

While your project may be well funded, don’t be complacent about the difficulties of accessing skilled people to achieve its goals. Globally, the security and privacy industries continue to suffer severe shortages of skilled professionals. Build these constraints into your forecasts and expectations, and think laterally about the use of partners.



Our view on APRA’s new information security regulation

For those of you who don’t work in financial services and may not know the structure associated with APRA’s publications, there are Prudential Practice Guides (PPGs) and Prudential Standards (APSs or CPSs). A PPG provides guidance on what APRA considers to be sound practice in particular areas. PPGs discuss legal requirements but are not themselves legal requirements. Simply put, this is APRA telling you what you should be doing without making it enforceable.

On the other hand, APSs and CPSs are regulatory instruments and are therefore enforceable.

Until now, those working within a cyber security team at an Australian financial services company had PPG 234 – Management of security risk in information and information technology (released on 1 February 2010) as their only reference point as to what APRA expected from them with regard to their cyber security controls. But things have moved on a fair bit since 2010. Don’t get us wrong, PPG 234 is still used today as the basis for many ‘robust’ conversations with APRA.

APRA’s announcement

That leads us to the Insurance Council of Australia’s Annual Forum on 7th March 2018. It was at this esteemed event that APRA Executive Board Member Geoff Summerhayes delivered a speech which noted:

“APRA views cyber risk as an increasingly serious prudential threat to Australian financial institutions. To put it bluntly, it is easy to envisage a scenario in which a cyber breach could potentially damage an entity so badly that it is forced out of business.

“….What I’d like to address today is APRA’s view on the extent to which the defences of the entities we regulate, including insurers, are up to the task of keeping online adversaries at bay, as well as responding rapidly and effectively when – and I use that word intentionally – a breach is detected”

Summerhayes then went on to announce the release of the consultation draft of CPS 234 – Information Security. Yeah, actual enforceable regulatory requirements on information security.

So what does it say?

Overall there are a lot of similarities to PPG 234 but the ones that caught our eye based upon our experience working within financial services were:

Roles and responsibilities

  • “The Board of an APRA-regulated entity (the Board) is ultimately responsible for ensuring that the entity maintains the information security of its information assets in a manner which is commensurate with the size and extent of threats to those assets, and which enables the continued sound operation of the entity”. – Interesting stake in the ground from APRA that Boards need to be clear on how they are managing information security risks. The next obvious question is what reporting will the Board need from management for them to discharge those duties?

Information security capability

  • “An APRA-regulated entity must actively maintain its information security capability with respect to changes in vulnerabilities and threats, including those resulting from changes to information assets or its business environment”. – Very interesting. There is a lot in this provision. First, there is a push to a threat-based model, which we fully endorse (see our recent blog post: 8 steps to a threat based defence model). Next, there is a requirement to have close enough control of your information assets to determine whether changes to those assets somehow adjust your threat profile. Definitely one to watch. That brings us nicely to the following:

Information asset identification and classification

  • “An APRA-regulated entity must classify its information assets, including those managed by related parties and third parties, by criticality and sensitivity. Criticality and sensitivity is the degree to which an information security incident affecting that information asset has the potential to affect, financially or non-financially, the entity or the interests of depositors, policyholders, beneficiaries, or other customers”. – This really is a tough one. From our experience, many companies say they have a handle on this for their structured data, with plans in place to address their unstructured data. However, very few actually do anything that would stand up to scrutiny.

Implementation of controls

  • “An APRA-regulated entity must have information security controls to protect its information assets, including those managed by related parties and third parties, that are implemented in a timely manner”. – Coming back to the previous point: there is now a requirement to have a clear line of sight of the sensitivity of data, and this only adds to the requirement to build effective controls over that data.
  • “Where information assets are managed by a related party or third party, an APRA-regulated entity must evaluate the design and operating effectiveness of that party’s information security controls”. – Third-party security assurance is no longer a nice-to-have, folks! Third-party risk is referenced a couple of times in the draft, so it clearly seems to be a focus point. This will be very interesting, as many companies struggle to get to grips with this risk. Having to face into actual regulatory obligations, however, is a very different proposition.

Incident management

  • “An APRA-regulated entity must have robust mechanisms in place to detect and respond to information security incidents in a timely manner. An APRA-regulated entity must maintain plans to respond to information security incidents that the entity considers could plausibly occur (information security response plans)”. – We love this section. This is a very important capability that often gets deprioritised when the dollars are being allocated. While the very large banks do have mature capabilities, most organisations do not. Pulling the ‘Banks’ industry benchmark data from our NIST maturity tool, we see that for the NIST Respond domain the industry average sits at 2.39 – in maturity terms, slightly above Level 2 (Repeatable), where the process is documented such that repeating the same steps may be attempted. In short, many have a lot to do in this space.
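
For readers less familiar with maturity scales, a short sketch of how a benchmark score maps to a maturity level may help. The level names follow a common CMM-style scale; the floor-the-score mapping is our own simplifying assumption for illustration:

```python
# CMM-style maturity levels commonly used alongside NIST CSF assessments.
LEVELS = {
    1: "Initial",
    2: "Repeatable",
    3: "Defined",
    4: "Managed",
    5: "Optimising",
}

def maturity_level(score: float) -> str:
    """Map a fractional maturity score to the level fully attained.

    Assumption: an organisation has 'reached' a level only once its
    score meets or exceeds that whole number, so 2.39 is read as
    Level 2 with partial progress towards Level 3.
    """
    level = max(1, min(5, int(score)))
    return f"Level {level} - {LEVELS[level]}"

print(maturity_level(2.39))  # prints "Level 2 - Repeatable"
```

This is why an industry average of 2.39 reads as “slightly above Level 2 – Repeatable” rather than “nearly Level 3”.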

Testing control effectiveness

  • “An APRA-regulated entity must escalate and report to the Board or senior management any testing results that identify information security control deficiencies that cannot be remediated in a timely manner, to enable an assessment and potential response by the Board or senior management to mitigate the exposure, as appropriate”. – Yep, we love this one too. It puts formal requirements around the basic principle of ‘fix what you find’! The key message from us to Boards and senior management: make sure you are clear on what is in and out of scope for this testing, and why.
  • “Testing must be conducted by appropriately skilled and functionally independent specialists”. – The Big 4 audit firms will be very excited about this one!

APRA Notification

  • “An APRA-regulated entity must notify APRA as soon as possible, and no later than 24 hours, after experiencing an information security incident”. – Eagle-eyed readers will spot that this mirrors the mandatory data breach obligations that recently came into force under the Privacy Act on 22 February. The Privacy Act requires entities that experience a serious breach involving personal information to notify the OAIC and affected individuals ‘as soon as practicable’ after identifying the breach. Another example of how companies now have to contend with notifying multiple regulators on different timeframes.
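
A small sketch makes the multiple-clocks problem tangible: one incident can start several notification timers at once. The APRA 24-hour deadline comes from the draft; “as soon as practicable” under the Privacy Act has no fixed statutory limit, so the 72-hour internal target below is purely our own illustrative assumption:

```python
from datetime import datetime, timedelta

def notification_deadlines(detected_at: datetime) -> dict:
    """Return the notification clocks started by one incident.

    The OAIC entry uses an assumed internal target, since 'as soon
    as practicable' has no fixed hour limit in the Privacy Act.
    """
    return {
        "APRA (draft CPS 234, hard deadline)": detected_at + timedelta(hours=24),
        "OAIC (Privacy Act, internal target)": detected_at + timedelta(hours=72),
    }

# Hypothetical incident detected on a Monday morning.
incident = datetime(2018, 4, 2, 9, 30)
for regulator, deadline in notification_deadlines(incident).items():
    print(f"{regulator}: notify by {deadline:%Y-%m-%d %H:%M}")
```

The practical takeaway: incident response plans need a single trigger that starts every regulator’s clock, rather than separate processes discovering the incident at different times.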

Conclusion

CPS 234 is just a draft, and the final product may ultimately look quite different. Nevertheless, we feel APRA’s approach is a positive step towards driving awareness of this significant risk, and one which will hopefully be used to baseline the foundational cyber security capabilities it describes. Well done, APRA!

Consultation on the package is open until 7 June 2018. APRA intends to finalise the proposed standard towards the end of the year, with a view to implementing CPS 234 from 1 July 2019.

Link to the consultation draft.


If you enjoyed this and would like to be notified of future elevenM blog posts, please subscribe below.