Privacy: The moral guidepost for a just society

On Human Rights Day, elevenM Senior Consultant Jordan Wilson-Otto highlights privacy’s fundamental role in human dignity and respect. He argues that advocating for privacy without recognising its position as a human right may prove unfruitful in the long term.


Today is Human Rights Day. It marks the day, in 1948, on which the United Nations General Assembly adopted the Universal Declaration of Human Rights (UDHR), which lies at the heart of the promotion of human rights under international law.

Privacy is one of the human rights articulated under the UDHR, and this year was a big one for privacy in Australia. We’ve seen a major shift in the regulatory landscape, which brings some exciting potential but also great risk should we take our eye off privacy’s origins as a human right.

Yes, privacy is having something of a revival. Over the last few years, we’ve seen a substantial increase in public awareness and renewed interest from lawmakers and regulators across the globe. Privacy is being talked about in a lot of new places, and in a lot of new ways. This is unsurprising. Data has become so central to modern economies that data (and therefore privacy) is now a key driver of productivity, and user data is so essential to the advertiser-funded digital ecosystem that it has become both a measure of market power and a focus for consumer protection.

This year, the Australian Competition and Consumer Commission (ACCC) released its Digital Platforms Inquiry Final Report, which highlighted the intersection of privacy, competition and consumer protection. The ACCC recognised that strong privacy laws can support and even drive competition and consumer protection objectives – for example by addressing sources of market inefficiencies such as information asymmetries and bargaining power imbalances and by empowering consumers to make informed choices about how their data is processed. This, in turn, can increase competition and encourage innovation.

These themes were echoed and reinforced in the ACCC’s Customer Loyalty Schemes Final Report, released this month. Together, the reports signal a sustained focus on the ways in which data practices can raise competition and consumer issues. This represents a significant shift in the regulatory landscape and is likely to weigh heavily on how our privacy rights are managed in the coming years.

Principle and utility

Both the ACCC and the Productivity Commission argue for privacy in a way that is quite distinct from its position as a human right. They argue that privacy should be protected, but not because it is essential for our dignity and autonomy as individuals, or because it protects us from discrimination or preserves rights such as freedom of expression and freedom of association. They value privacy for its economic contribution, rather than as a necessary precondition for individual freedom and democracy.

And that’s fine, up to a point – there’s no harm in talking about the utility of human rights protection. For privacy advocates this can even be an effective strategy. Hitching privacy protection to these economic objectives provides a concrete and quantifiable argument for the protection of privacy, which may resonate in ways that the language of rights does not.

But arguments from utility are fragile. They only work when everything is pulling in the same direction. When the trade winds inevitably change and privacy is seen to stand in the way of competition or innovation, its utility falls away. By ignoring privacy’s intrinsic value, utilitarian arguments give us no way to weigh privacy against competing interests.

So it’s important that utility doesn’t become our only strategy. We need to keep sight of the intrinsic value of privacy as a fundamental human right. As a human right, privacy carries a unique moral weight. It forms part of a universal baseline of dignity and respect to which we are all entitled – a baseline that has endured, with near-universal consensus, for over 70 years. Human rights like privacy provide the moral guideposts for a just, equitable and humane society.

If we become too focused on the practical utility of privacy, we risk losing sight of its deeper significance. If that happens, we risk trading it away for less than it is worth.

elevenM’s submission to Australia’s 2020 Cyber Security Strategy

As a passionately Australian company, elevenM is emotionally invested in the safety and prosperity of this country. We recognise that national progress will increasingly depend on our collective ability to answer the significant challenges of the cyber domain.

That’s why we were excited to lend our voice to the development of Australia’s 2020 Cyber Security Strategy by responding to the Australian Government’s call for views.

Our contribution, which we submitted earlier this month, highlights areas we believe deserve closer collective attention. These include:

  • A national approach to managing supply chain risks
  • Greater engagement of cyber security service providers in national cyber security initiatives
  • A sharper focus on attracting and developing strategic, executive-level cyber security talent, and
  • A stronger national voice on cyber security, privacy and data issues

Our submission, and these recommendations, draw on our direct experience as cyber security and privacy practitioners. In working with prominent Australian businesses and government agencies on their digital risk challenges, we’ve observed both emerging challenges for individual businesses and system-wide issues and patterns.

We hope our submission will be a constructive contribution to the development of Australia’s 2020 Cyber Security Strategy.

Click here to read our full submission.

 

Sustaining the value from your security tools

This is the second post in a two-part series by elevenM Senior Project Manager Mike Wood on how businesses can get the most value from the deployment of security products.


In the first part of this blog series, we explored how to extract value from security products. In this post we discuss how to sustain and extend this value, especially as your tool evolves.

You’ve nearly finished your delivery project and you’ve got some great data on the value the tool is starting to deliver. You’re also clear on how you’ll measure remaining value. Stakeholders are pleased with what they’re seeing. Time to focus on the next thing, right?

Not necessarily. Success with security products is not just about getting them to work in today’s context. It’s about how they will work and improve over time. Attackers don’t stand still. Threats evolve, and tools, security processes and procedures must keep pace. A benefit of SaaS security tools is that much of this advancement is done by the vendor. But these benefits will be lost if you haven’t got the capability or capacity to keep pace.

The SaaS vendor will keep the tool working and manage uptime. Your tech support teams can look after the integration points and manage user access. But how effectively you support the tool’s outputs and outcomes, the things that deliver value and sustain benefits over time, is critical to success.

It is therefore essential to build a support model for not just the tool, but the tool’s value.

A value support model takes the benefits and their associated context and aligns them to how business processes run and to the metrics and incentives of the people responsible. Who does the work? Do they have the skills and capacity? Have they been trained? How are they rewarded and what KPIs are in place? How do escalations flow?

Surely delivering a value support model is part of project success?! You’re right – it should be. But support is often thought of in narrow terms – does the tool work, does it deliver the data we need? – and value support is often missed.

A good example is a value support model for a Cloud Access Security Broker (CASB), a tool used to enforce security policies for your business’ use of cloud services.

A CASB can flag alerts, but it is how those alerts are handled where much of the value lies.  How are alerts prioritised?  What SLAs are in place within the SOC / Forensics / Security team who manage alerts and coordinate responses to them?  How do alerts and trends feed into cloud governance and architecture decisions and strategy?  A CASB value support model will have specified and tested this, meaning the organisation doesn’t just have a tool it can run, but outcomes it can actually use to the fullest possible extent to drive security uplift and deliver the target benefits.
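To make this concrete, here is a minimal sketch of what codified alert-handling rules might look like. Everything in it – the alert fields, severity scale, team names and SLAs – is a hypothetical illustration, not the output or API of any particular CASB product.

```python
from dataclasses import dataclass

# Hypothetical alert structure; real CASB products will differ.
@dataclass
class CasbAlert:
    service: str                    # cloud service that raised the alert
    severity: int                   # 1 (low) to 5 (critical)
    involves_sensitive_data: bool   # touches regulated or crown-jewel data

# Agreed SLA targets (hours to first response) for each priority.
SLA_HOURS = {"critical": 1, "high": 4, "standard": 24}

def triage(alert: CasbAlert) -> dict:
    """Assign a priority, owning team and SLA to a CASB alert.

    This encodes the decisions a value support model should make
    explicit: who owns each alert, and how quickly they must act.
    """
    if alert.severity >= 4 and alert.involves_sensitive_data:
        priority, owner = "critical", "forensics"         # coordinated response
    elif alert.severity >= 3:
        priority, owner = "high", "soc"                   # analyst investigation
    else:
        priority, owner = "standard", "cloud-governance"  # trend reporting input
    return {"priority": priority, "owner": owner, "sla_hours": SLA_HOURS[priority]}

# Example: an unsanctioned file-sharing app handling sensitive data.
print(triage(CasbAlert("file-sharing-app", severity=4, involves_sensitive_data=True)))
```

The code itself is beside the point; what matters is that owners, priorities and SLAs are explicit, agreed and testable, rather than living in someone’s head.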

Our advice is to design the value support model as early as practicable in the project. The model should align to the vendor’s product roadmap and to your organisation’s security goals and strategy. Stakeholders should be consulted and should agree on a governance approach for the tool’s threat area and on the tool outputs that will drive decision-making.

If you are clear about all this early on, it will allow you to test the value support model and make iterative improvements in lockstep with tool deployment (typically, such improvements will also cost you less and be less disruptive if made during the project than afterwards).  It will also give you a clear view on the funding requirements for the tool.

Building the value support model is something we help businesses with. It requires a blend of key skills and experience: security knowledge, program delivery, systems integration and support. Investing in getting this right is key to success, and also contributes towards higher cyber security maturity, assessments of which examine process efficacy as well as the systems in use.

If you have also clearly defined how you’ll measure value (per the first post in this series), then tying this value and its associated support model to a funding request will allow you to make a powerful business case: one for security capabilities that deliver value not just now, but long into the future.

Then you can focus on the next thing (as much as your value delivery model allows!).

Getting value from your security tools

It is easy to spend significant time and effort deploying a security product, only to find that it is difficult to prove the value delivered. In this two-part blog post, elevenM Senior Project Manager Mike Wood explores how to extract and sustain value from security products.


The true uplift value from any security tool comes from the context in which it is used, rather than the capabilities of the tool itself.

Consider a Web Application Firewall (WAF), which filters and blocks web traffic. A WAF is only as effective as the rules it follows and how its alerts are responded to. How WAF rules are set and maintained over time, and the level of automation you use in responding to alerts, will determine how much protection your WAF actually provides.
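To illustrate what that automation can look like, here is a minimal sketch of an escalation path from monitoring to alerting to auto-blocking. The thresholds and actions are invented for the example; real WAFs express this logic through their own rule languages and APIs.

```python
from collections import defaultdict

# Hypothetical thresholds; tune to your own traffic profile.
ALERT_THRESHOLD = 5    # flagged requests before a human is alerted
BLOCK_THRESHOLD = 20   # flagged requests before automatic blocking

suspicious_counts: defaultdict = defaultdict(int)
blocked_ips: set = set()

def handle_suspicious_request(source_ip: str) -> str:
    """Decide how to respond to a request a WAF rule has flagged.

    Escalates from passive monitoring, to alerting the SOC,
    to automated blocking once a threshold is crossed.
    """
    if source_ip in blocked_ips:
        return "block"
    suspicious_counts[source_ip] += 1
    if suspicious_counts[source_ip] >= BLOCK_THRESHOLD:
        blocked_ips.add(source_ip)   # automated containment
        return "block"
    if suspicious_counts[source_ip] >= ALERT_THRESHOLD:
        return "alert"               # route to the SOC for review
    return "monitor"

# Example: the 20th flagged request from one source is auto-blocked.
for _ in range(20):
    action = handle_suspicious_request("203.0.113.7")
print(action)  # block
```

The design choice worth noticing is the graduated response: pure detection creates work for people, while agreed and tested automation removes it.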

You could pay the SaaS vendor to do this for you, but is that good value for money? Does the vendor know your business, its context and the threats you face as well as your own people do?  Would handing this over to the vendor also mean you miss out on building knowledge among your team?

Another example is code scanning; it is only as effective as the vulnerability management process that acts on the outcomes from the scans. If you find vulnerabilities using a sophisticated tool, but don’t act on them (or you report them, but they don’t get remediated by development teams) then it’s hard to gain true value from such tools.
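One simple way to make sure scan results are acted on is to wire the scanner into the delivery pipeline so that serious findings block a release rather than piling up in a report. The sketch below assumes a generic JSON findings format and invented severity labels; it is illustrative, not tied to any particular scanning tool.

```python
import json
import sys

# Findings at or above these severities fail the build outright;
# anything lower should still be ticketed and tracked.
FAIL_SEVERITIES = {"critical", "high"}

def gate(findings_path: str) -> int:
    """Read scanner output (assumed: a JSON list of findings) and
    return a non-zero exit code if serious vulnerabilities remain."""
    with open(findings_path) as f:
        findings = json.load(f)
    serious = [x for x in findings if x.get("severity") in FAIL_SEVERITIES]
    for finding in serious:
        # In a real pipeline this would also raise a ticket with the
        # owning development team, not just print to the build log.
        print(f"UNRESOLVED: {finding.get('id')} ({finding.get('severity')})")
    return 1 if serious else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```

A gate like this only works if the surrounding process exists: someone must own the tickets, and the thresholds must be agreed with development teams in advance.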

When tools are evaluated not just in a risk context but also in terms of people and process, value can be shown.  A good way to start is by asking yourself questions such as:

As a result of this tool or its outputs …

  • What manual security work will be automated?
  • What actions does the tool require and what processes and people capabilities need to be built to support it?
  • Are we delivering more effective security risk mitigations for the same level of effort or funding?
  • What is the contribution to overall reduction in risk? [an easy one to measure is financial impact, such as the value of fraud prevented or the reduction in cost spent recovering from successful attacks – see the sketch after this list]
  • Are security processes running faster / is there a positive impact to velocity?
  • What is the tool’s contribution to NIST maturity and regulatory or other such obligations? [this is best measured by independent reviews]
  • What data and insights do we have that we didn’t have before and what valuable activities and outcomes have these enabled? [for example, anti-automation/bot protection tools can show you the bad automated traffic that’s hitting your assets, informing your understanding of attack vectors you’re susceptible to and how to address them. Anti-automation tooling may be enough, but to meet the objective of layered security you may also identify code changes.  Data and insights are crucial to being able to analyse, prepare, risk-assess and make such decisions.]
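For the financial-impact question flagged above, one common (if rough) approach is an annualised loss expectancy (ALE) comparison: the expected yearly cost of incidents before the tool, minus the expected cost after, minus the cost of the tool itself. The figures below are invented purely for illustration.

```python
def annualised_loss_expectancy(incident_cost: float, incidents_per_year: float) -> float:
    """ALE = expected cost of one incident x expected annual frequency."""
    return incident_cost * incidents_per_year

# Illustrative figures only, not benchmarks.
ale_before = annualised_loss_expectancy(250_000, 2.0)  # $500k expected loss/year
ale_after = annualised_loss_expectancy(250_000, 0.5)   # $125k expected loss/year
tool_cost = 150_000                                    # annual licence and support

net_benefit = (ale_before - ale_after) - tool_cost
print(f"Net annual benefit of the tool: ${net_benefit:,.0f}")  # $225,000
```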

Another important contextual consideration is the impact to teams outside of security.  To gauge how they value security, ask questions such as:

As a result of this tool or its outputs …

  • Is security awareness increasing (measured, for example, by increased proactive engagement with the security team)?
  • Is the security team perceived differently – for example, as less ‘disruptive’ by developers?
  • Have we freed up our security teams (typically scarce resources in the market) to focus on exceptions, forensics and strategic improvements, by empowering non-security teams to run repeatable security activities that we can then monitor and advise on?

To truly realise the value of security products, we need to have a clear view on their broader context.

In the next part of this series, we explore how to sustain and build on the value delivered by security tools over time.

GovHack: a lesson in optimism

elevenM Senior Consultant and Victorian State Director of GovHack, Jordan Wilson-Otto, explains why it’s important to maintain a sense of optimism about the future of technology and society.


It’s judging time for GovHack, the largest open data hackathon in the southern hemisphere. Looking through this year’s submissions, I’ve been thinking about how GovHack’s mission of optimism, civic engagement and empowerment presents a partial answer to the hard questions we raised in our recent post about parenting, privacy and the future.

When we talk about technology, it’s easy to focus on the things that can go wrong. Few things in this world don’t have secondary effects, and we need to think about the implications of the systems that we are building and using so that we can harness their benefits while anticipating and mitigating their downsides.

Undue focus on benefits can lead to bad outcomes, but undue focus on harms can lead to bad outcomes too. If we can’t imagine a more equitable, sustainable or humane world, or a world where technology has made life better and not worse, then there can be no progress. The best we can hope for is stasis, or perhaps the return to some imagined golden age.

But optimism is hard. Almost all the modern narratives about technology are dystopian. Automation is coming for our jobs, algorithmic bias is perpetuating inequalities and killer robots are just around the corner. Meanwhile surveillance capitalism leads to our every online move being tracked, while spy agencies look on and hackers and trolls wait in the wings ready to pounce. And we’re powerless to respond, disabled by an increasingly polarised and dysfunctional political discourse, powered by social media.

So the solution falls to the individual – we’re taught to fear and protect ourselves from technology. We need to watch out for scams, not reuse passwords, be careful what we download or where we browse, and not click links in emails. We’re supposed to read privacy policies, scrutinise permissions, install ad blockers, delete cookies and somehow keep track of the changing data practices of the thousand different apps and online services that we use. We need to look out for trolls, and be alert to the threats of cyber bullying, online harassment and other forms of online abuse.

I think we owe it to ourselves to inject a bit of optimism every once in a while. It’s not all that important how we do it. Maybe read some utopian science fiction, watch some Star Trek, or just consider how far we’ve already come as a species. For me, this is where GovHack comes in – it’s a perfect lesson in optimism. An annual refresher on civics and the power of community, and a reminder that the shape of our technology and our world is not a given, and that technology is just a set of tools that we can build and apply as we need, to the problems we choose.

GovHack is a free, weekend-long creative competition that takes place across Australia and New Zealand. It’s a ‘festival of ideas, using open government data to make our communities better places’. Competitors have 46 hours to make something cool with open government data.  What people make is really up to them. It could be an app, some kind of informative visualisation, a prototype gadget, a game, a story, an artistic display or anything else they can think of.

Projects vary from the whimsical to the deeply practical, and from simple to highly technical. You can see some of this year’s projects here, but some highlights include:

  • ‘Are you really going to drive tomorrow?’, which uses AI to predict days when a user’s commute is likely to be particularly congested, and prompts the user in advance to consider other options.
  • ‘Ripple effect’, an interactive story about everyday encounters that shows users how simple choices they wouldn’t associate with water affect its supply and distribution.
  • ‘Once upon a crime’, a song about Australian convicts and their history, drawing on multiple historical data sources.
  • ‘Insight without sight’, which sought to make open data more accessible for visually impaired people by using sound to convey data in a graph, combined with a new way to access open government data through a voice command interface with Queensland’s Open API.

Sometimes projects go on to be successful start-ups, or lead to lasting improvements or new and better ways of doing things in government. But for the most part, GovHack projects don’t last beyond the weekend. And that’s ok – in fact, that’s kind of the point.  You’re not going to fix the world with a song and a story. We know this. The problems we face are real and will require both expertise and sustained commitment to solve, if they can be solved at all. But songs and stories are so nice. And they represent a willingness to engage with data, with government, and with the rest of our community to think about the world we live in. A willingness to play with ideas and try to imagine something new.

That idea of ‘play’ is important here too – paradoxically, it can be the license not to solve the world’s problems that gives us the creative freedom that we will need to solve the world’s problems.

So, in working on the big problems, let’s not limit ourselves to avoiding harms. Let’s take a lesson from GovHack on the value of play and all things surprising and tangential. Let’s remember that our current technologies and ways of thinking are just one way of doing things – the right solutions might be just around the corner, if only we give ourselves license to get there.

The unfairness of cyber awareness

elevenM Principal Arjun Ramachandran explores why cyber awareness matters, despite the prevalence of seemingly unstoppable sophisticated cyber-attacks.



“Deserve got nuthin’ to do with it. It’s his time, that’s all.”
– Snoop, The Wire.

We want to believe our behaviours solely determine the outcomes we get. But it’s not always the case, especially in the complex cyber realm.

The brilliant US drama The Wire made an artform of summing up life’s hard truths in pithy one-liners, delivered in the language of the street. In Season 5, drug gang member Snoop is asked by a junior gang member whether a target really “deserves” to be “hit”. Her response (above) lays bare the unfairness at the heart of the adversarial drug war.

Cyber security too, ain’t always fair. The existence of a committed, human adversary is a significant and differentiating feature of cyber risk that those of us involved in the field should keep in mind.

Especially in the areas of security training and education. We often seek inspiration from areas like public health, where highly-acclaimed campaigns have raised awareness of the risks of smoking and sun exposure, driving down these behaviours and vastly reducing the incidence of bad outcomes.

But these areas don’t have a human adversary. In cyber, for all of our awareness and reduction of risky behaviours, it remains the case that a determined, highly-sophisticated attacker could still get at a company’s crown jewels by persistently probing for small areas or moments of weakness.

The attack on the Australian National University is a striking example, recently and evocatively described by its vice-chancellor as a “diamond heist” rather than a “smash and grab”.

“It was an extremely sophisticated operation, most likely carried out by a team of between five to 15 people working around the clock”. – ANU vice-chancellor Brian Schmidt

While it may be true that a well-educated and aware workforce might not “deserve” to get hacked, Snoop’s street wisdom and the ANU hack suggest that increasing the awareness of end users may still not be enough to prevent the most sophisticated attacks, such as those by highly-skilled state-sponsored attackers.

And awareness on its own stands to be defeated. The UK’s National Cyber Security Centre points out that people-focused activities such as education must come with technical controls, as part of a multi-layered approach. That’s a sentiment recently echoed by the Australian Government.

“But like all other forms of security, awareness is a complement to, not replacement for, the availability of secure features. For example, drivers are provided with a seat belt in addition to education about the importance of road safety and incentives to use the seat belt. And the same expectations and requirements we have where safety is paramount should apply in cyberspace” – Australia’s 2020 Cyber Security Strategy – A call for views

But we also can’t throw the baby out with the bath water.

In our travels, we occasionally come across a certain bluntness or defeatism about cyber awareness. Because of the success of, and attention given to, state-sponsored attacks, education and awareness is labelled “ineffective” and technical controls are deemed all that matter.

In our view this is a severe over-correction.

It pays to remember that there exists a broad swathe of attackers – not every attacker coming for a small business (or even an enterprise) is bankrolled by a rogue state and has access to an arsenal of zero-day exploits.  

In fact, many are commercially-motivated cybercriminals of varying levels of ability, plying their trade using commodity tools purchased off underground marketplaces. They can be as sensitive to cost pressures as the CEO of a cash-poor business. Anything that makes it harder (i.e. costlier) for them to achieve their goals may be enough to prompt them to move on to an easier, more cost-effective target.

One of the ways we help businesses do this – such as through our recently developed learning packages – is by raising employees’ awareness of the risks and providing actionable advice on how they can make the average cyber attacker’s life that little bit more frustrating. Maybe a stronger password, or a healthier scepticism towards dubious emails, will do the trick.

While technical controls might overtake end-user awareness as the best response to a specific cyber threat (for example, some now argue multi-factor authentication should be prioritised as a response to phishing), an effective awareness program can then redeploy the fruitful conversation it has established with staff to the next evolving area of risk (for example, how staff use cloud services).

In this way, over the long term, awareness activities continually embed a sense of responsibility and ownership in a workforce, acting as a precursor to, and an enabler of, a secure culture.