By Arjun Ramachandran
Pete’s a hardened security professional. He’s been in this game for over 20 years and has the battle scars of many cyber uplift and remediation programs. He feels the pain of CISOs fighting the good fight.
I’m a former journo. Even though I’m no longer actively reporting, the craft matters to me and I get defensive and outraged in equal measures about the quality of print news.
Pete and I butted heads in recent days over the reporting of the NSW Auditor-General’s (AG) report into Transport for NSW and Sydney Trains.
The AG report, and much of the media coverage, was overtly critical. A “litany of cyber security weaknesses [identified in] a scathing review” was how the SMH described it.
Pete wasn’t overly happy about the coverage, feeling both the media reporting and the AG report to be unfair, particularly in the context of how organisations actually go about improving their cyber posture (more on this later).
While I defended the reporting as – for the most part – being a fair and straight write-up of the AG report, I have to concede that on the bigger point Pete was dead right.
There’s a problem in our industry, and in broader society, with how we often talk about and respond to reviews of organisations’ security programs. It’s not that we’re quick to talk about deficiencies that don’t exist or that aren’t serious, but that the way these reviews are presented drives a response and conversation that is counterproductive to good security outcomes.
There are no perfect security programs, or organisations with perfect security. The threat landscape changes daily and security uplifts require complex whole-of-organisation changes that necessarily take a long time. Even in the most mature organisations, “good security” is a work in progress, with gaps continuously identified and addressed. Any detailed review will find holes and inadequacies.
To be clear, this is not an argument to take lightly the adverse findings of a review. Arguably, these findings are a large part of why we do reviews, so that a measured and constructive response to them can lead to improved security.
But too often in our journeys at elevenM we see instances where the supercharged or out-of-proportion delivery of adverse findings leads to an equally supercharged response (sometimes in the form of its own large remediation program) that sees a sizeable redirection of resources, and ultimately the deferral or interruption of well-considered strategies or critical uplift projects.
We found it particularly uncomfortable that the AG report was based on a red team exercise. A red team exercise – where “authorised attackers” (in the words of the AG report) are given permission to try and break into systems – will always find security flaws. These exercises are typically conducted expressly to provide insights to security teams that they can learn from. To broadly publish those findings in the tone and manner that the AG report has done didn’t strike us as constructive.
Helping your business stay abreast and make sense of the critical stories in digital risk, cyber security and privacy. Email news@elevenM.com.au to subscribe.
“But has the horse already bolted?” That’s the question senior US officials want companies who’ve applied patches for the highly publicised Microsoft Exchange security breach to ask themselves. The ugly Exchange Server compromise headlines our round-up, which also features an IoT breach that snared businesses across a range of industries, and the latest ransomware tactics.
Summary: Four previously unidentified vulnerabilities in Microsoft Exchange Server have been exploited by state-sponsored actors operating out of China, with some reports citing as many as 60,000 organisations affected.
Key risk takeaway: Being patched against these vulnerabilities might be giving system administrators a false sense of confidence. Having observed numerous concerted attempts to exploit the flaws, US officials are urging companies to take aggressive action to investigate and remediate compromises that may already have occurred (before patching). Accordingly, in addition to moving fast to release patches, Microsoft has published detailed guidance on its website on how to investigate and remediate the vulnerabilities, and has even developed a “one-click mitigation tool” for organisations with smaller or less-resourced security teams. To learn more about how to develop a comprehensive vulnerability management program to drive timely remediation of dangerous security flaws (noting once again that patching alone may be insufficient in the Exchange incident), check out our recent blog series here.
Summary: Directors of public firms are expected to soon face greater accountability for cyber risks under the Government’s cyber strategy.
Key risk takeaway: Lack of preparation for cyber risks by boards may soon be punishable, as the Government seeks to make changes to directors’ duties in the second half of 2021. The Government is light on details but has cited preventing customer credentials from ending up on the dark web as a potential example of these new obligations. The introduction of these obligations follows the imposition of similar duties on directors of financial institutions under APRA’s Prudential Standard CPS 234. The moves are also part of a broader push for the Defence Department to take more forceful steps to “step in and protect” critical infrastructure companies, even if they are in the private sector.
#cyber #APRA #regulations
Summary: Hacktivists gained access to approximately 150,000 Verkada surveillance cameras around the world after finding the username and password for an administrator account publicly exposed on the internet.
Key risk takeaway: This incident is a concrete example of the oft-described security risks of IoT (not to mention the implications of poor password management). It also highlights that risks and impacts from these devices may be felt differently across a variety of sectors. For example, uncomfortable regulatory conversations could arise for some of Verkada’s clients (which include childcare centres and aged-care facilities), given the cameras have built-in facial recognition technology and can be placed in sensitive locations. The breach also underscores ongoing challenges for organisations in achieving effective security assurance over their supply chains, especially cloud-based suppliers.
#cybersecurity #IOT #suppliersecurity
Summary: Universal Health Services (UHS) has reported losing US$67 million from the September ransomware attack that affected a large range of systems.
Key risk takeaway: The serious financial implications of ransomware continue to be apparent, with UHS’ heavy losses comprising both lost revenue and increased labour costs. Meanwhile, Finnish psychology service Vastaamo, whose ransomware challenges we described in October, has now filed for bankruptcy. In a mark of how lucrative ransomware has become, ransomware operators reportedly pulled in $370 million in profits last year. Techniques also continue to evolve: researchers recently observed attackers breaching ‘hypervisor servers’ (which organisations use to manage virtual machines), allowing them to encrypt all virtual machines on a system and increasing pressure on victim organisations to pay a ransom. In the face of the continued evolution of ransomware, Australia’s Federal Labor Opposition has now called for a national ransomware strategy comprising a variety of measures including regulations, law enforcement, sanctions, diplomacy, and offensive cyber operations. Some of the thinking in the strategy – e.g. around enforcement and sanctions – also aligns with recent expert calls for a global effort to create a new international collaboration model to tackle ransomware.
#ransomware #cybersecurity #costofdatabreach
Summary: WhatsApp deferred the introduction of new privacy terms in order to buy time to better explain the change.
Key risk takeaway: This is one of many recent examples that show us it is no longer sufficient for online services to have a “take it or leave it” attitude in their privacy terms. Having first taken such an approach with its revised privacy terms, WhatsApp had to scramble to explain the changes after “tens of millions of WhatsApp users started exploring alternatives, such as Signal and Telegram”. More broadly, a recent New York Times editorial also argued that current consent models and the default practice requiring consumers to opt-out of data collection practices undermines privacy and must change. In our recent blog post we explore in detail the adequacy of current approaches to consent, which is being examined under the current review of the Australian Privacy Act.
Summary: TikTok agreed to settle 21 combined class-action lawsuits over invasion of privacy for US$92 million.
Key risk takeaway: Disregarding appropriate privacy measures will have financial consequences – whether that’s through regulatory fines, legal settlements (as is the case here) or the long-term erosion of user trust. Complaints from the lawsuits against TikTok alleged a range of issues, from using facial analysis to determine users’ ethnicity, gender, and age to illegal transmissions of private data. And just as TikTok said it didn’t want to take the time to litigate the complaints, it was also rated one of the least trusted digital platforms. Privacy responsiveness and social responsibility from digital platforms are fast becoming market differentiators, with 62% of Americans saying search and social media companies need more regulation.
#privacy #transparency #trust
In this post from our ‘Privacy in focus’ blog series we discuss notice and consent — key cornerstones of privacy regulation both in Australia and around the globe — and key challenges in how these concepts operate to protect privacy.
From the 22 questions on notice, consent, and use and disclosure in the Privacy Act issues paper, there is one underlying question: Who should bear responsibility for safeguarding individuals’ privacy?
This is the second post in a three-part series on vulnerability management. In this post, elevenM’s Theo Schreuder describes the six steps of a vulnerability management program.
In the first post of this series, we explored why vulnerability management is important and looked at key considerations for setting up a vulnerability management program for success. In this post, we’ll step you through the six steps of vulnerability management.
The six steps of vulnerability management
Let’s explore each step in more detail.
1. Discover vulnerabilities
The most efficient way to discover vulnerabilities is to use a centralised and dedicated tool (for example, Rapid7 InsightVM, Tenable, Qualys) that regularly scans assets (devices, servers, internet-connected things) for published vulnerabilities. Information about published vulnerabilities can be obtained from official sources such as the US-based National Vulnerability Database (NVD), via alerts from your Security Operations Centre (SOC) or from external advisories.
Running scans on a regular basis ensures you have continuous visibility of vulnerabilities in your network.
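As a sketch of this discovery step, the snippet below matches an asset’s installed software against a small, hand-rolled feed of published vulnerabilities. In practice the feed would come from the NVD or your scanning tool; the asset structure, product names and field names here are illustrative assumptions only.

```python
# Minimal vulnerability-discovery sketch: compare an asset's installed software
# versions against a feed of published vulnerabilities. The feed below is a
# hypothetical stand-in for data from the NVD or a commercial scanning tool.

# (product, vulnerable_version, cve_id) -- illustrative entries
PUBLISHED_VULNS = [
    ("openssl", "1.0.2", "CVE-2016-0800"),
    ("apache-httpd", "2.4.49", "CVE-2021-41773"),
]

def discover_vulnerabilities(asset):
    """Return the known vulnerabilities affecting an asset's installed software."""
    findings = []
    for product, version, cve in PUBLISHED_VULNS:
        installed = asset.get("software", {}).get(product)
        if installed == version:
            findings.append({"asset": asset["name"], "product": product, "cve": cve})
    return findings

web_server = {"name": "web-01", "software": {"apache-httpd": "2.4.49", "openssl": "1.1.1"}}
print(discover_vulnerabilities(web_server))
# one finding: CVE-2021-41773 on apache-httpd
```

A real scanner does far more (version-range matching, authenticated checks, fingerprinting), but the core loop — inventory cross-referenced against published advisories — is the same.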
2. Prioritise assets
Prioritising assets allows you to determine which remediation actions to focus on first to reduce the greatest amount of risk within the shortest time and with least budget.
Prioritisation of assets relies on having a well-maintained asset inventory (e.g. a Configuration Management Database or CMDB) and a list of the critical “crown jewel” assets and applications from a business point of view (for example, payroll systems are typically considered critical assets). Another factor to consider in determining prioritisation is the exposure of an asset to the perimeter of the network, and how many “hops” the asset is from an internet-facing device.
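The prioritisation described above can be sketched as a simple scoring function that combines business criticality with network exposure (hops from an internet-facing device). The weights below are arbitrary assumptions for illustration, not a standard formula.

```python
# Illustrative asset prioritisation: crown-jewel status and proximity to the
# network perimeter drive the score. Weights are assumptions, not a standard.

def priority_score(is_crown_jewel, hops_from_internet):
    criticality = 10 if is_crown_jewel else 3
    # Fewer hops from the perimeter means greater exposure.
    exposure = max(10 - 2 * hops_from_internet, 1)
    return criticality * exposure

assets = [
    {"name": "payroll-db", "crown_jewel": True,  "hops": 3},
    {"name": "web-01",     "crown_jewel": False, "hops": 0},
    {"name": "test-vm",    "crown_jewel": False, "hops": 5},
]
ranked = sorted(assets, key=lambda a: priority_score(a["crown_jewel"], a["hops"]), reverse=True)
print([a["name"] for a in ranked])
# the internal crown-jewel payroll system still outranks the exposed web server
```

The point of the sketch is the shape of the decision, not the numbers: remediation effort goes first to the assets where criticality and exposure intersect.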
3. Assess vulnerability severity
After devices are scanned, discovered vulnerabilities are usually assigned a severity score based on industry standards such as the Common Vulnerability Scoring System (CVSS), as well as custom calculations that — depending on the scanning tool — take into account factors including the ease of exploitability and the number of known exploit kits and plug-and-play malware kits available to exploit that vulnerability. This step can also involve verifying that the discovered vulnerability is not a false positive, and does in fact exist on the asset.
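As a concrete example, CVSS v3.1 maps a numeric base score (0.0–10.0) to a qualitative severity band. The bands below follow the rating scale published in the CVSS v3.1 specification:

```python
# CVSS v3.1 qualitative severity rating scale:
# 0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.

def cvss_severity(score):
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # prints "Critical"
```

Scanning tools layer their own risk calculations (exploit availability, asset context) on top of this base score, which is why two tools can rank the same CVE differently.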
4. Report on vulnerability risk

When creating reports on vulnerability risk, it’s important to consider different levels of reporting to suit the needs of different audiences. Your reporting levels could include:
- Executive level reporting
This level of reporting focuses on grand totals of discovered vulnerabilities and vulnerable assets, total critical vulnerabilities, and historical trends over time. The aim is to provide senior executives with a straightforward view of vulnerabilities in the network and trends.
- Management level reporting
For individual managers and teams to manage their remediation work, it helps to provide them with a lower-level summary of only the assets they are responsible for. This report will have more detail than an executive level report, and should provide the ability to drill down and identify the most vulnerable assets and critical vulnerabilities where remediation work should begin.
- Support team level reporting
This is the highest resolution report, providing detail for each vulnerability finding on each asset that a support team is responsible for. Depending on the organisation and the way patching responsibilities are divided, splitting out reporting between operating systems (below base) and application level (above base) can also be advantageous as remediation processes for these levels can differ.
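To illustrate how one set of findings can feed the different reporting levels above, the sketch below rolls the same data up into an executive-level total and a management-level per-team breakdown. The field names and findings are illustrative assumptions.

```python
# Tiered reporting from a single findings dataset: executive totals at the top,
# per-team drill-down underneath. Data and field names are illustrative.

from collections import Counter

findings = [
    {"asset": "web-01",     "team": "infra", "severity": "Critical"},
    {"asset": "web-01",     "team": "infra", "severity": "Medium"},
    {"asset": "payroll-db", "team": "apps",  "severity": "High"},
]

# Executive level: grand totals by severity only.
executive = Counter(f["severity"] for f in findings)
print(dict(executive))

# Management level: findings grouped by the team responsible for remediation.
by_team = {}
for f in findings:
    by_team.setdefault(f["team"], []).append((f["asset"], f["severity"]))
print(by_team)
```

Support-team reports would go one level deeper again, listing every individual finding per asset — same data, progressively higher resolution.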
5. Remediate vulnerabilities
“The easiest way to get rid of all of your vulnerabilities is to simply turn off all of your devices!” – origin unknown
Remediation can take a variety of forms including but not limited to changing configuration files, applying a suggested patch from the scanning tool or even uninstalling the vulnerable program entirely.
There may also be legitimate cases where a vulnerability is exempted from remediation. Factors could include:
- Is the asset soon to be decommissioned or nearing end-of-life?
- Is it prohibitively expensive to upgrade to the newest secure version of the software?
- Are there other mitigating controls in place (e.g. air-gapping, firewall rules)?
- Will the required work impact revenue by reducing service availability?
6. Verify remediation
Are we done yet? Not quite.
It doesn’t help if — after your support teams have done all this wonderful work — your vulnerability scanning tool is still reporting that the asset is vulnerable. Therefore, it is very important that once remediation work is complete you verify that the vulnerability is no longer being detected.
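Verification can be as simple as re-running the scan and confirming the finding no longer appears. The sketch below uses a dictionary of scan results as a stand-in for a real scanning tool’s output; the data structures are illustrative.

```python
# Remediation-verification sketch: compare pre- and post-remediation scan
# results for a given asset and CVE. Scan results are illustrative stand-ins.

def still_detected(scan_results, asset, cve):
    """Return True if the vulnerability is still detected on the asset."""
    return cve in scan_results.get(asset, set())

before = {"web-01": {"CVE-2021-41773"}}  # scan before the patch was applied
after  = {"web-01": set()}               # re-scan after remediation work

assert still_detected(before, "web-01", "CVE-2021-41773")
assert not still_detected(after, "web-01", "CVE-2021-41773")
print("remediation verified")
```

If the post-remediation scan still flags the finding, the ticket goes back to the support team rather than being closed — the scan result, not the change record, is the source of truth.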
Stay tuned for the third and final post in the series, in which we discuss common challenges and considerations for a well-functioning vulnerability management program.
Read all posts in the series:
Patch me if you can: the importance of vulnerability management
Patch me if you can: the six steps of vulnerability management
Patch me if you can: key challenges and considerations
In this post from our ‘Privacy in focus’ blog series, we explore arguments for and against changes to the definition of personal information being considered by the review of the Privacy Act, and the implications of those changes.
One of the simplest but most far-reaching potential amendments to the Privacy Act is the replacement of a single word: replacing ‘about’ with ‘relates to’ in the definition of ‘personal information’.
Supporters of the change (such as the ACCC, the OAIC, and the Law Council of Australia) say it would clarify significant legal uncertainty, while also aligning Australia with the GDPR standard and maintaining consistency between the Privacy Act and the Consumer Data Right regime.
Those opposed (such as the Communications Alliance and the Australian Industry Group) warn that the change may unnecessarily broaden the scope of the Act, potentially imposing substantial costs on industry without any clear benefit to consumers.
To understand why, we’ll dig into the origins of the definition and the present uncertainty regarding its application.
Precision is important
The definition of personal information sets the scope of the Privacy Act. All the rights and obligations in the Act rely on this definition. All the obligations that organisations have to handle personal information responsibly rely on this definition. All the rights that individuals have to control how their personal information is used rely on this definition. Personal information is the very base on which privacy regulation rests.
Any uncertainty in such an important definition can result in significant costs for both individuals and organisations. At best, uncertainty can result in wasted compliance work governing and controlling data that need not be protected. At worst, it can mean severe violations of privacy for consumers when data breaches occur as a result of failure to apply controls to data that should have been protected. Examples of the former are frequent — even OAIC guidance encourages organisations to err on the side of caution in identifying data as personal information. Unfortunately, examples of the latter are even more commonplace — the disclosure of Myki travel data by Public Transport Victoria, the publication of MBS/PBS data by the Federal Department of Health, and Flight Centre’s release of customer data for a hackathon are all recent examples of organisations releasing data subject to inadequate controls in the belief that it did not amount to personal information.
These uncertain times
According to the OAIC, the ACCC, and many others, there is substantial uncertainty as to the scope of ‘personal information’, particularly as it relates to metadata such as IP addresses and other technical information. That uncertainty was partially created, and certainly enhanced, by the decision of the Administrative Appeals Tribunal in the Grubb case, which was upheld on appeal in the Federal Court.
In the Grubb case, the Tribunal found that certain telecommunications metadata was not personal information because it was really ‘about’ the way data flows through Telstra’s network in order to deliver a call or message, rather than about Mr Grubb himself.
The ruling came as a surprise to many. The orthodoxy up until that point had been that the word ‘about’ played a minimal role in the definition of personal information, and that the relevant test was simply whether the information is connected or related to an individual in a way that reveals or conveys something about them, even where the information may be several steps removed from the individual.
Today, it’s still unclear how significant a role ‘about’ should play in the definition. Could one argue, for example, that location data from a mobile phone is information about the phone, not its owner? Or that web browsing history is information about data flows and connections between computers, rather than about the individual at the keyboard?
OAIC guidance is some help, but it’s not legally binding. In the absence of further consideration by the courts, which is unlikely to happen any time soon, the matter remains unsettled. Organisations are without a clear answer as to whether (or in what circumstances) technical data should be treated as personal, forcing them to roll the dice in an area that should be precisely defined. Individuals are put in the equally uncertain position of not knowing what information will be protected, and how far to trust organisations who may be trying to do the right thing.
Relating to uncertainty
Those in favour of reform want to resolve this uncertainty by replacing ‘about’ with ‘relates to’. The effect would be to sidestep the Grubb judgement and lock in a broad understanding of what personal information entails, so that the definition covers (and the Privacy Act protects) all information that reveals or conveys something about an individual, including device or technical data that may be generated at a remove.
Those who prefer the status quo take the view that the present level of uncertainty is manageable, and that revising the definition to something new and untested in Australia may lead to more confusion rather than less. Additionally, there is concern that ‘relates to’ may represent a broader test, and that the change could mean a significant expansion of the scope of the Act into technical and operational data sets.
What we think
By drawing attention to ‘about’ as a separate test, the Grubb case has led to an unfortunate focus on how information is generated and its proximity to an individual, when the key concern of privacy should always be what is revealed or conveyed about a person. In our view, replacing ‘about’ with ‘relates to’ better focuses consideration on whether an identifiable individual may be affected.
Industry concerns about expanding the scope of the Act are reasonable, particularly in the telco space, though we anticipate the impact to be modest and manageable, as the scope of personal information will always remain bounded by the primary requirement that it be linked back to an identifiable individual. Further, we anticipate that any additional compliance costs will be offset by a clearer test and better alignment with the Consumer Data Right and the Telecommunications (Interception and Access) Act, both of which use ‘relates to’ in defining personal information.
Finally and significantly for any businesses operating outside of Australia, amending ‘about’ to ‘relates to’ would align the Privacy Act more closely with GDPR. Aligning with GDPR will be something of a recurring theme in any discussions about the Privacy Act review. This is for two reasons:
- GDPR is an attractive standard. GDPR has come to represent the de-facto global standard with which many Australian and most international enterprises already comply. It’s far from perfect, and there are plenty of adaptations we might want to make for an Australian environment, but generally aligning to that standard could achieve a high level of privacy protection while minimising additional compliance costs for business.
- Alignment might lead to ‘adequacy’. The GDPR imposes fewer requirements on data transfers to jurisdictions that the EU determines to have ‘adequate’ privacy laws. A determination of adequacy would substantially lower transaction and compliance costs for Australian companies doing business with the EU.
Click ‘I agree’ to continue
In our next edition of the Privacy in Focus series, we’ll take a look at consent and the role it might play in a revised Privacy Act. Will Australia double down on privacy self-management, or join the global trend towards greater organisational accountability?
Footnote: Because of the way that privacy complaints work, disputes about the Privacy Act very rarely make it before the courts — a fact we’ll dig into more when we cover the proposal for a direct right of action under the Act.
Read all posts from the Privacy in focus series:
Privacy in focus: A new beginning
Privacy in focus: Who’s in the room?
Privacy in focus: What’s in a word?
Privacy in focus: The consent catch-22
Privacy in focus: A pub test for privacy
Privacy in focus: Towards a unified privacy regime