News round-up March 2021 — That horrible Exchange compromise, IoT security threats made real, and digital platforms’ latest privacy challenges

Helping your business stay abreast and make sense of the critical stories in digital risk, cyber security and privacy. Email news@elevenM.com.au to subscribe.

The round-up

“But has the horse already bolted?” That’s the question senior US officials want companies who’ve applied patches for the highly publicised Microsoft Exchange security breach to ask themselves. The ugly Exchange Server compromise headlines our round-up, which also features an IoT breach that snared businesses across a range of industries, and the latest ransomware tactics.

Key articles

Thousands of Exchange servers breached prior to patching, CISA boss says

Summary: Four previously unidentified vulnerabilities in Microsoft Exchange Server have been exploited by state-sponsored actors operating out of China, with some reports citing as many as 60,000 affected organisations.

Key risk takeaway: Being patched against these vulnerabilities might be giving system administrators a false sense of security. Having observed numerous concerted attempts to exploit the flaws, US officials are urging companies to take aggressive action to investigate and remediate compromises that may have occurred before patches were applied. Accordingly, in addition to moving fast to release patches, Microsoft has published detailed guidance on its website on how to investigate and remediate the vulnerabilities, and has even developed a “one-click mitigation tool” for organisations with smaller or less-resourced security teams. To learn more about how to develop a comprehensive vulnerability management program that drives timely remediation of dangerous security flaws (noting once again that patching alone may be insufficient in the Exchange incident), check out our recent blog series here.

#vulnerabilitymanagement #statesponsoredattack


Directors must face cyber risks

Summary: Directors of public firms are expected to soon face greater accountability for cyber risks under the Government’s cyber strategy.

Key risk takeaway: Lack of preparation for cyber risks by boards may soon be punishable, as the Government seeks to make changes to directors’ duties in the second half of 2021. The Government is light on details but has cited preventing customer credentials from ending up on the dark web as a potential example of these new obligations. The introduction of these obligations would follow the duties already imposed on directors of financial institutions by APRA’s Prudential Standard CPS 234. The moves are also part of a broader push for the Defence Department to take more forceful steps to “step in and protect” critical infrastructure companies, even if they are in the private sector.

#cyber #APRA #regulations


Hackers say they’ve gained access to surveillance cameras in Australian childcare centres, schools and aged care

Summary: Hacktivists gained access to approximately 150,000 Verkada surveillance cameras around the world after finding the username and password for an administrator account publicly exposed on the internet.

Key risk takeaway: This incident is not only a concrete example of the oft-described security risks of IoT (not to mention the implications of poor password management); it also highlights that the risks and impacts of these devices may be felt differently across a variety of sectors. For example, uncomfortable regulatory conversations could arise for some of Verkada’s clients (which include childcare centres and aged-care facilities), given the cameras have built-in facial recognition technology and can be placed in sensitive locations. The incident also underscores the ongoing challenge organisations face in achieving effective security assurance over their supply chains, especially cloud-based suppliers.

#cybersecurity #IOT #suppliersecurity


Universal Health Services reports $67 million in losses after apparent ransomware attack

Summary: Universal Health Services (UHS) has reported losing US$67 million from the September ransomware attack that affected a wide range of its systems.

Key risk takeaway: The serious financial implications of ransomware continue to be apparent, with UHS’ heavy losses comprising both lost revenue and increased labour costs. Meanwhile, Finnish psychology service Vastaamo, whose ransomware challenges we described in October, has now filed for bankruptcy. In a mark of how lucrative ransomware has become, ransomware operators reportedly pulled in $370 million in profits last year. Attack techniques also continue to evolve: researchers recently observed attackers breaching ‘hypervisor servers’ (which organisations use to manage virtual machines), allowing them to encrypt all virtual machines on a system at once and increasing the pressure on victim organisations to pay a ransom. In the face of this continued evolution, Australia’s Federal Labor Opposition has called for a national ransomware strategy comprising a variety of measures, including regulation, law enforcement, sanctions, diplomacy and offensive cyber operations. Some of the thinking in the strategy – for example around enforcement and sanctions – also aligns with recent expert calls for a global effort to create a new international collaboration model to tackle ransomware.

#ransomware #cybersecurity #costofdatabreach


WhatsApp tries again to explain what data it shares with Facebook and why

Summary: WhatsApp deferred the introduction of new privacy terms in order to buy time to better explain the change.

Key risk takeaway: This is one of many recent examples showing that it is no longer sufficient for online services to take a “take it or leave it” attitude in their privacy terms. Having first taken such an approach with its revised privacy terms, WhatsApp had to scramble to explain the changes after “tens of millions of WhatsApp users started exploring alternatives, such as Signal and Telegram”. More broadly, a recent New York Times editorial argued that current consent models and the default practice of requiring consumers to opt out of data collection undermine privacy and must change. In our recent blog post we explore in detail the adequacy of current approaches to consent, which is being examined under the current review of the Australian Privacy Act.

#privacy #consent


TikTok reaches $92 million settlement over nationwide privacy lawsuit

Summary: TikTok agreed to settle 21 combined class-action lawsuits over invasion of privacy for US$92 million.

Key risk takeaway: Disregarding appropriate privacy measures will have financial consequences – whether through regulatory fines, legal settlements (as is the case here) or the long-term erosion of user trust. Complaints in the lawsuits against TikTok alleged a range of issues, from using facial analysis to determine users’ ethnicity, gender and age, to illegal transmissions of private data. Even as TikTok said it didn’t want to spend the time litigating the complaints, it was rated one of the least trusted digital platforms. Privacy responsiveness and social responsibility are fast becoming market differentiators for digital platforms, with 62% of Americans saying search and social media companies need more regulation.

#privacy #transparency #trust

Privacy in focus: The consent catch-22

In this post from our ‘Privacy in focus’ blog series we discuss notice and consent — key cornerstones of privacy regulation both in Australia and around the globe — and key challenges in how these concepts operate to protect privacy.

Across the 22 questions on notice, consent, and use and disclosure in the Privacy Act issues paper, one underlying question emerges: who should bear responsibility for safeguarding individuals’ privacy?

Patch me if you can: the six steps of vulnerability management

This is the second post in a three-part series on vulnerability management. In this post, elevenM’s Theo Schreuder describes the six steps of a vulnerability management program.

In the first post of this series, we explored why vulnerability management is important and looked at key considerations for setting up a vulnerability management program for success. In this post, we’ll step you through the six steps of vulnerability management.


The six steps of vulnerability management

The six steps of vulnerability management [Source: CDC]

Let’s explore each step in more detail.

1. Discover vulnerabilities

The most efficient way to discover vulnerabilities is to use a centralised, dedicated tool (for example, Rapid7 InsightVM, Tenable, Qualys) that regularly scans assets (devices, servers, internet-connected things) for published vulnerabilities. Information about published vulnerabilities can be obtained from official sources such as the US-based National Vulnerability Database (NVD), via alerts from your Security Operations Centre (SOC), or from external advisories.

Running scans on a regular basis ensures you have continuous visibility of vulnerabilities in your network.
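
To make the discovery step concrete, here is a minimal sketch (in Python) of pulling recently published vulnerabilities from the NVD by keyword. The endpoint, parameter names and response shape reflect NVD’s public 2.0 API documentation at the time of writing and should be checked against the current docs; in practice your scanning tool ingests this feed for you.

    # A minimal sketch only: list CVEs published in the last `days` days that
    # mention a keyword, using the public NVD 2.0 API. Parameter names and the
    # response shape are taken from NVD's public documentation and may change.
    import datetime
    import requests

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def recent_cves(keyword, days=7):
        end = datetime.datetime.now(datetime.timezone.utc)
        start = end - datetime.timedelta(days=days)
        params = {
            "keywordSearch": keyword,
            "pubStartDate": start.isoformat(timespec="seconds"),
            "pubEndDate": end.isoformat(timespec="seconds"),
        }
        response = requests.get(NVD_API, params=params, timeout=30)
        response.raise_for_status()
        return response.json().get("vulnerabilities", [])

    for item in recent_cves("Exchange Server"):
        cve = item["cve"]
        print(cve["id"], cve["descriptions"][0]["value"][:80])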

 

2. Prioritise assets

Prioritising assets allows you to determine which remediation actions to focus on first, so that you reduce the greatest amount of risk in the shortest time and with the least budget.

Prioritisation of assets relies on having a well-maintained asset inventory (e.g. a Configuration Management Database, or CMDB) and a list of the critical “crown jewel” assets and applications from a business point of view (for example, payroll systems are typically considered critical assets). Another factor to consider is an asset’s exposure at the perimeter of the network, and how many “hops” the asset is from an internet-facing device.
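
As an illustration only, a prioritisation score might combine these inputs in a very simple way. The fields and weights below are assumptions made up for this sketch, not an industry formula; most scanning tools offer their own asset-criticality tagging.

    # Illustrative only: a toy prioritisation score combining business
    # criticality (from your CMDB / crown-jewels register) with network
    # exposure. Field names and weights are assumptions for this sketch.
    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        crown_jewel: bool        # flagged as business-critical in the CMDB
        internet_facing: bool    # sits on the network perimeter
        hops_from_internet: int  # network hops from an internet-facing device

    def priority_score(asset):
        score = 10.0 if asset.crown_jewel else 5.0
        if asset.internet_facing:
            score += 5.0
        else:
            # the further from the perimeter, the lower the exposure weighting
            score += max(0.0, 5.0 - asset.hops_from_internet)
        return score

    assets = [
        Asset("payroll-db", crown_jewel=True, internet_facing=False, hops_from_internet=2),
        Asset("marketing-web", crown_jewel=False, internet_facing=True, hops_from_internet=0),
    ]
    for a in sorted(assets, key=priority_score, reverse=True):
        print(f"{a.name}: {priority_score(a):.1f}")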

 

3. Assess vulnerability severity

After devices are scanned, discovered vulnerabilities are usually assigned a severity score based on industry standards such as the Common Vulnerability Scoring System (CVSS), as well as custom calculations that (depending on the scanning tool) take into account factors such as ease of exploitability and the number of known exploit kits and plug-and-play malware kits available to exploit the vulnerability. This step can also involve verifying that a discovered vulnerability is not a false positive, and does in fact exist on the asset.
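
By way of illustration, a custom calculation might overlay threat context on the CVSS base score. The weighting below is an assumption invented for this sketch, not any vendor’s actual formula.

    # Illustrative only: blend the CVSS base score with threat context
    # (e.g. known exploit kits). The weighting is an assumption for the sketch.
    def custom_risk_score(cvss_base, known_exploit_kits, confirmed_on_asset):
        if not confirmed_on_asset:           # likely a false positive
            return 0.0
        score = cvss_base                    # 0.0 - 10.0 per CVSS
        score += min(known_exploit_kits, 3)  # cap the exploit-kit uplift
        return min(score, 10.0)

    print(custom_risk_score(cvss_base=7.5, known_exploit_kits=2, confirmed_on_asset=True))  # 9.5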

 

4. Reporting

When creating reports on vulnerability risk, it’s important to consider different levels of reporting to suit the needs of different audiences. Your reporting levels could include:

  1. Executive level reporting
    This level of reporting focuses on grand totals of discovered vulnerabilities and vulnerable assets, total critical vulnerabilities, and historical trends over time. The aim is to give senior executives a straightforward view of vulnerability levels across the network and how they are trending.
  2. Management level reporting
    For individual managers and teams to manage their remediation work, it helps to provide them with a lower-level summary of only the assets they are responsible for. This report will have more detail than an executive level report, and should provide the ability to drill down and identify the most vulnerable assets and critical vulnerabilities where remediation work should begin.
  3. Support team level reporting
    This is the highest-resolution report, providing detail for each vulnerability finding on each asset that a support team is responsible for. Depending on the organisation and the way patching responsibilities are divided, it can also be advantageous to split reporting between the operating system (below base) and application (above base) levels, as remediation processes for these can differ.
A sample management-level vulnerability report generated using Tableau
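
A minimal sketch of how these levels can be produced from the same underlying data: roll raw findings up into executive totals and per-team breakdowns. The finding fields below are assumptions about a generic scanner export, not any particular tool’s schema.

    # A minimal sketch: roll raw scanner findings up into executive- and
    # management-level views. The field names are assumed, not a real schema.
    from collections import Counter, defaultdict

    findings = [
        {"asset": "payroll-db", "owner": "Platform Team", "severity": "Critical"},
        {"asset": "marketing-web", "owner": "Web Team", "severity": "High"},
        {"asset": "marketing-web", "owner": "Web Team", "severity": "Critical"},
    ]

    # Executive level: grand totals and critical counts
    severity_totals = Counter(f["severity"] for f in findings)
    print("Total findings:", len(findings))
    print("Critical findings:", severity_totals["Critical"])

    # Management level: findings grouped by the team responsible
    by_owner = defaultdict(list)
    for f in findings:
        by_owner[f["owner"]].append(f)
    for owner, owned in by_owner.items():
        criticals = sum(1 for f in owned if f["severity"] == "Critical")
        print(f"{owner}: {len(owned)} findings ({criticals} critical)")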

 

5. Remediate vulnerabilities

“The easiest way to get rid of all of your vulnerabilities is to simply turn off all of your devices!”

– origin unknown

Remediation can take a variety of forms, including but not limited to changing configuration files, applying a patch suggested by the scanning tool, or even uninstalling the vulnerable program entirely.

There may also be legitimate cases where a vulnerability is exempted from remediation (a simple way of recording these exemptions is sketched after the list below). Factors could include:

  • Is the asset soon to be decommissioned or nearing end-of-life?
  • Is it prohibitively expensive to upgrade to the newest secure version of the software?
  • Are there other mitigating controls in place (e.g. air-gapping, firewall rules)?
  • Will the required work impact revenue by reducing service availability?
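
Here is a minimal sketch of recording such exemptions so they stay visible and get re-reviewed rather than quietly forgotten. The fields and the 90-day review period are assumptions for the sketch; a GRC tool or even a spreadsheet can serve the same purpose.

    # Illustrative only: a tiny risk-acceptance/exemption register so that
    # exempted vulnerabilities stay logged and come back up for review.
    import csv
    import datetime

    REGISTER = "risk_acceptances.csv"

    def log_acceptance(asset, cve_id, reason, review_days=90):
        review_date = datetime.date.today() + datetime.timedelta(days=review_days)
        with open(REGISTER, "a", newline="") as f:
            csv.writer(f).writerow([asset, cve_id, reason, review_date.isoformat()])

    def due_for_review():
        today = datetime.date.today().isoformat()
        with open(REGISTER, newline="") as f:
            return [row for row in csv.reader(f) if row and row[3] <= today]

    log_acceptance("legacy-app-01", "CVE-2017-5638", "decommission scheduled for Q3")
    print(due_for_review())  # empty until a review date arrives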

 

6. Verify remediation

Are we done yet? Not quite.

It doesn’t help if — after your support teams have done all this wonderful work — your vulnerability scanning tool is still reporting that the asset is vulnerable. Therefore, it is very important that once remediation work is complete you verify that the vulnerability is no longer being detected.
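
As a minimal sketch, verification can be as simple as re-checking the latest findings for the asset once the scanner has run again. The latest_findings_for callable below stands in for a query against your scanning tool’s API or report export; it is an assumed helper, not a real library call.

    # A minimal sketch of verifying remediation: re-check the latest findings
    # and confirm the vulnerability no longer appears for the asset.
    def is_remediated(asset, cve_id, latest_findings_for):
        """True if cve_id no longer appears in the latest findings for asset."""
        open_cves = {f["cve"] for f in latest_findings_for(asset)}
        return cve_id not in open_cves

    # Example with a stubbed scan export
    stub = lambda asset: [{"cve": "CVE-2021-26855"}]            # still detected
    print(is_remediated("mail-gw-01", "CVE-2021-26855", stub))  # False -> not done yet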

Stay tuned for the third and final post in the series, in which we discuss common challenges and considerations for a well-functioning vulnerability management program.


Read all posts in the series:
Patch me if you can: the importance of vulnerability management
Patch me if you can: the six steps of vulnerability management
Patch me if you can: key challenges and considerations

 

Privacy in focus: What’s in a word?

In this post from our ‘Privacy in focus’ blog series, we explore arguments for and against changes to the definition of personal information being considered by the review of the Privacy Act, and the implications of those changes.

One of the simplest but most far-reaching potential amendments to the Privacy Act is the replacement of a single word: replacing ‘about’ with ‘relates to’ in the definition of ‘personal information’.

Supporters of the change (such as the ACCC, the OAIC, and the Law Council of Australia) say it would clarify significant legal uncertainty, while also aligning Australia with the GDPR standard and maintaining consistency between the Privacy Act and the Consumer Data Right regime.

Those opposed (such as the Communications Alliance and the Australian Industry Group) warn that the change may unnecessarily broaden the scope of the Act, potentially imposing substantial costs on industry without any clear benefit to consumers.

To understand why, we’ll dig into the origins of the definition and the present uncertainty regarding its application.

Precision is important

The definition of personal information sets the scope of the Privacy Act. All the rights and obligations in the Act rely on this definition. All the obligations that organisations have to handle personal information responsibly rely on this definition. All the rights that individuals have to control how their personal information is used rely on this definition.  Personal information is the very base on which privacy regulation rests.

Any uncertainty in such an important definition can result in significant costs for both individuals and organisations. At best, uncertainty can result in wasted compliance work governing and controlling data that need not be protected. At worst, it can mean severe violations of privacy for consumers when data breaches occur as a result of failure to apply controls to data that should have been protected. Examples of the former are frequent — even OAIC guidance encourages organisations to err on the side of caution in identifying data as personal information. Unfortunately, examples of the latter are even more commonplace — the disclosure of Myki travel data by Public Transport Victoria, the publication of MBS/PBS data by the Federal Department of Health, and Flight Centre’s release of customer data for a hackathon are all recent examples of organisations releasing data subject to inadequate controls in the belief that it did not amount to personal information.

These uncertain times

According to the OAIC, the ACCC, and many others, there is substantial uncertainty as to the scope of ‘personal information’, particularly as it relates to metadata such as IP addresses and other technical information. That uncertainty was partially created, and certainly enhanced, by the decision of the Administrative Appeal Tribunal in the Grubb case, which was upheld on appeal in the Federal Court.

In the Grubb case, the Tribunal found that certain telecommunications metadata was not personal information because it was really ‘about’ the way data flows through Telstra’s network in order to deliver a call or message, rather than about Mr Grubb himself.

The ruling came as a surprise to many. The orthodoxy up until that point had been that the word ‘about’ played a minimal role in the definition of personal information, and that the relevant test was simply whether the information is connected or related to an individual in a way that reveals or conveys something about them, even where the information may be several steps removed from the individual.

Today, it’s still unclear how significant a role ‘about’ should play in the definition. Could one argue, for example, that location data from a mobile phone is information about the phone, not its owner? Or that web browsing history is information about data flows and connections between computers, rather than about the individual at the keyboard?

OAIC guidance is some help, but it’s not legally binding. In the absence of further consideration by the courts, which is unlikely to happen any time soon[1], the matter remains unsettled. Organisations are without a clear answer as to whether (or in what circumstances) technical data should be treated as personal, forcing them to roll the dice in an area that should be precisely defined. Individuals are put in the equally uncertain position of not knowing what information will be protected, and how far to trust organisations who may be trying to do the right thing.  

Relating to uncertainty

Those in favour of reform want to resolve this uncertainty by replacing ‘about’ with ‘relates to’. The effect would be to sidestep the Grubb judgement and lock in a broad understanding of what personal information entails, so that the definition covers (and the Privacy Act protects) all information that reveals or conveys something about an individual, including device or technical data that may be generated at a remove.

Those who prefer the status quo take the view that the present level of uncertainty is manageable, and that revising the definition to something new and untested in Australia may lead to more confusion rather than less. Additionally, there is concern that ‘relates to’ may represent a broader test, and that the change could mean a significant expansion of the scope of the Act into technical and operational data sets.

What we think

By drawing attention to ‘about’ as a separate test, the Grubb case has led to an unfortunate focus on how information is generated and its proximity to an individual, when the key concern of privacy should always be what is revealed or conveyed about a person. In our view, replacing ‘about’ with ‘relates to’ better focuses consideration on whether an identifiable individual may be affected.

Industry concerns about expanding the scope of the Act are reasonable, particularly in the telco space, though we anticipate any expansion will be modest and manageable, as the scope of personal information will always remain bounded by the primary requirement that it be linked back to an identifiable individual. Further, we anticipate that any additional compliance costs will be offset by a clearer test and better alignment with the Consumer Data Right and the Telecommunications (Interception and Access) Act, both of which use ‘relates to’ in defining personal information.

Finally, and significantly for any businesses operating outside Australia, amending ‘about’ to ‘relates to’ would align the Privacy Act more closely with the GDPR. Aligning with the GDPR will be something of a recurring theme in any discussions about the Privacy Act review. This is for two reasons:

  • GDPR is an attractive standard. The GDPR has come to represent the de facto global standard with which many Australian and most international enterprises already comply. It’s far from perfect, and there are plenty of adaptations we might want to make for an Australian environment, but generally aligning to that standard could achieve a high level of privacy protection while minimising additional compliance costs for business.
  • Alignment might lead to ‘adequacy’. The GDPR imposes fewer requirements on data transfers to jurisdictions that the EU determines to have ‘adequate’ privacy laws. A determination of adequacy would substantially lower transaction and compliance costs for Australian companies doing business with the EU.

Click ‘I agree’ to continue

In our next edition of the Privacy in Focus series, we’ll take a look at consent and the role it might play in a revised Privacy Act. Will Australia double down on privacy self-management, or join the global trend towards greater organisational accountability?

Footnote: [1] Because of the way that privacy complaints work, disputes about the Privacy Act very rarely make it before the courts — a fact we’ll dig into more when we cover the proposal for a direct right of action under the Act.


Read all posts from the Privacy in focus series:
Privacy in focus: A new beginning
Privacy in focus: Who’s in the room?
Privacy in focus: What’s in a word?
Privacy in focus: The consent catch-22

Standards – huhh! – what are they good for?

elevenM’s Cassie Findlay looks at getting the most out of standards. Cassie is a current member of the Standards Australia Committee on Records Management and a former member of the International Organization for Standardization (ISO) Technical Committee on Records Management. She was lead author of the current edition of the International Standard on records management, ISO 15489. 

“Standards are like toothbrushes. Everyone thinks they’re a good idea, but no one wants to use someone else’s.”

(origin unknown) 

Why pay attention to standards, national or international? Aren’t they just for making sure train tracks in different states are the same gauge? What do they have to do with managing and securing information or with privacy? Do we need standards? 

The value of standards for manufacturing or product safety is clear and easy to grasp.  

However, for areas like privacy, recordkeeping and information security, with all their contingencies, the question arises as to how we can standardise when so often the answer to what to do is ‘it depends’.

The answer lies in what you seek to standardise, and indeed what type of standards products you set out to create. 

Of the domains elevenM works in, it could be argued that cyber security and information security have the clearest use cases for standardisation. The ISO 27001 family of standards has a huge profile and wide uptake, and has become embedded in contracts and requirements for doing business internationally. By meeting the requirements for a robust information security management system (ISMS), organisations can signal the readiness of their security capability to the market and to business partners. However, this is a domain in which standards have proliferated, particularly in cyber security. This was a driver for the work of the NSW Government-sponsored Cyber Security Standards Harmonisation Taskforce, led by AustCyber and Standards Australia, which recently released a report containing a range of recommendations for harmonising and simplifying cyber security standards.

In the world of information management, specifically recordkeeping, strong work has been underway over the last couple of decades to codify and standardise approaches to building recordkeeping systems, tools and processes, in the form of the International Standard ISO 15489 Records Management and its predecessors. In the case of this standard, the recordkeeping profession is not seeking to establish a minimum set of compliance requirements, but rather to describe the optimal approach to building and maintaining key recordkeeping controls and processes, including the work of determining what records to make and keep, and ensuring that recordkeeping is a business enabler – whatever your business. The standard takes a ‘digital first’ approach and supports the work of building good recordkeeping frameworks regardless of format. Complementary to ISO 15489, the ISO 30300 Management systems for records suite offers compliance-focused standards that help organisations establish and maintain management systems supporting good recordkeeping, and that can be audited by third parties such as government regulators or independent auditors.

In the privacy world, compliance requirements come, in most jurisdictions, directly from applicable laws (the GDPR, Australia’s Privacy Act), and practitioners typically focus on these rather than seeking out standards. The United States has a patchwork of regulatory requirements affecting privacy, but has seen widespread adoption of the California Consumer Privacy Act (CCPA) for consumer privacy, with other states following suit with similar laws. The US national standards body, NIST, does, however, have a strong track record in standards development for security and now for privacy, in the form of its Cybersecurity Framework and, more recently, its Privacy Framework. It is important to note, though, that these are not standards but voluntary tools issued by NIST to help organisations manage privacy risk.

The next time your organisation is looking to align with a standard, be sure to understand why, and whether:

  • meeting the standard helps you establish your bona fides with the market, such as via adoption of the ISO 27001 standards;
  • independent auditors and other third parties have signalled they will use the standard to guide their audits, such as the ISO 30300 suite;  
  • the standard provides your organisation with a useful tool or framework towards best practice, as found in the foundational standard for recordkeeping, ISO 15489; or 
  • regulatory or compliance requirements exist that supersede any standard – and are prescriptive on their own (for example, the Privacy Act and guidance from the OAIC).

The toothbrush gag is one heard often in standards development circles such as ISO Committees, and it perhaps has a limited audience, but the point it makes is a good one in that standards are – and should be – tailored to users and uses. They do not, however, tackle plaque.  



Patch me if you can: the importance of vulnerability management

This is the first post in a three-part series on vulnerability management. In this post, elevenM’s Theo Schreuder explains why vulnerability management is so important and outlines some key considerations when establishing a vulnerability management program.

In 2017 the American credit bureau Equifax suffered a breach of its corporate servers that led to customer data being leaked from its credit monitoring databases. The fallout from the breach included the exposure of the personal information of almost 150 million Americans, the resignation of the company’s CEO, and a reputation battering that included a scathing report by the US Senate.

The breach occurred due to attackers exploiting a vulnerability in the Apache Struts website framework — a vulnerability that was unpatched for over two months despite a fix being known and available.

With a proper vulnerability management program in place, Equifax could have prioritised applying the Apache Struts security patch, preventing an enormous impact on consumers and its own reputation, and saving US$575 million in eventual legal settlement costs.

It’s little wonder that vulnerability management features heavily in well-respected cyber security frameworks and strategies, such as the NIST Cybersecurity Framework and the Australian Government’s Essential Eight. Equifax has also come to the party, putting a program in place: “Since then, Equifax said that it’s implemented a new management system to handle vulnerability updates and to verify that the patch has been issued.”

So what is “vulnerability management”?

Vulnerability management is the end-to-end process from the identification of vulnerabilities in your network through to verification that they have been remediated.

The first priority in vulnerability management is to scan the network. And by the network, we mean everything: servers, routers, laptops, even that weird voice-controlled air-conditioning system you have in your offices. Having visibility of unpatched vulnerabilities across your network allows you to prioritise patching and prevent potential breaches.
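
As a very rough sketch of where “scan everything” starts, here is a simple host-discovery sweep driven from Python using the open-source nmap tool. It assumes nmap is installed and that you are authorised to scan the address range; in practice a commercial scanner performs this discovery (and the vulnerability checks) for you.

    # A minimal sketch: ping-sweep a network range with nmap to see which
    # hosts are up, as a first input to asset and vulnerability visibility.
    # Assumes the nmap binary is installed and you are authorised to scan.
    import subprocess

    def discover_hosts(cidr):
        """Return IPs that responded to an nmap ping sweep of `cidr`."""
        out = subprocess.run(
            ["nmap", "-sn", "-oG", "-", cidr],   # -sn: host discovery only
            capture_output=True, text=True, check=True,
        ).stdout
        return [line.split()[1] for line in out.splitlines()
                if line.startswith("Host:") and "Status: Up" in line]

    print(discover_hosts("192.168.1.0/24"))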

In subsequent posts in this series, we’ll step through the key elements that comprise the vulnerability management process and discuss some key challenges and considerations for a well-functioning program.

For now, here are two key considerations when starting to think about establishing a vulnerability management program:

Firstly, it is important to be clear and transparent about the true state of risk in your environment, as nothing will get done if the risk is not pointed out. Even if a vulnerability is “risk accepted”, it needs to be continuously logged and monitored so that if a breach occurs you know where to look. Visibility of where the greatest vulnerabilities lie encourages action; it’s easy to fall into an “out of sight, out of mind” approach when you are not getting clear and regular reporting.

Secondly, in order to get this regular reporting, it is advantageous to automate as much as possible. This reduces the effort required to create reports on a regular basis, freeing up resources to actually investigate and analyse vulnerability data.

Stay tuned for the next post in the series.


Read all posts in the series:
Patch me if you can: the importance of vulnerability management
Patch me if you can: the six steps of vulnerability management
Patch me if you can: key challenges and considerations