Privacy, COVID and vulnerable communities in the Golden City

elevenM’s Cassie Findlay brings a first-hand account of how privacy considerations are playing a role in shaping COVID-19 outcomes in parts of the US. 

It’s no secret that the city of San Francisco and the surrounding counties that make up the Bay Area are home to some of the starkest inequities in the world.

Having just returned home after four and a half years living and working there, I can confirm that the evidence of staggering wealth existing side by side with extreme poverty and homelessness is everywhere, and it is shocking. Encampments dot the city streets, where people lack basic sanitation and medical services. Solutions are often temporary and deployed only in response to residents’ complaints.

Bringing a pandemic response into this mix was never going to be easy. The local and state governments’ response to the COVID crisis has, by overall US standards, been reasonably good, but not necessarily for the region’s most vulnerable people.

A case in point is the decision late last year by the cities of Oakland and San Francisco to axe a testing program offered by the Google affiliate Verily. Introduced in March, the platform screened people for symptoms, booked appointments and reported test results. Unfortunately, from a privacy perspective, the design of the program added friction to the uptake of critical services in a pandemic.

In a letter to the California Secretary of Health, the City of Oakland’s Racial Disparities Task Force raised concerns about the collection of personal data on the platform amid a crisis of trust among Black and Latinx communities about how their personal information might be used or shared by governments and corporations. Participants were required to sign an authorisation form stating that their information could be shared with multiple third parties involved in the testing program, including unnamed contractors and state and federal health authorities.

As explained by the Electronic Frontier Foundation’s Lee Tien to local public radio station KQED: “While the form tells you that Verily may share data with ‘entities that assist with the testing program,’ it doesn’t say who those entities are. If one of those unnamed and unknown entities violates your privacy by misusing your data, you have no way to know and no way to hold them accountable.”  

Given the need for better and more accessible testing for people experiencing homelessness, and the known severity of COVID’s impact on Black and Latinx communities, obstacles like this to testing uptake are concerning. Other testing services in Oakland and San Francisco have fortunately adopted approaches based on more direct engagement and trust-building in these communities, rather than defaulting to an app-based solution with the trust and privacy concerns that entails.

This case shows just how much trust issues around the use of personal information can affect critical services to vulnerable communities, and it holds valuable lessons for those of us working on the delivery of public services with technology.

My key takeaways are: 

  • Consumers understand and take seriously the trade-offs involved in exchanging personal information for services, discounts and other benefits. 
  • We are moving beyond approaches to data collection that treat consumers as a homogenous group in terms of their willingness to share, but we can safely assume that unknown secondary purposes for their data will always be regarded with suspicion. 
  • Success will increasingly depend on having a more nuanced picture of your ‘customers’, including their trust in your organisation or sector, whether it is a commercial enterprise or a public health service. 
  • Building a data governance strategy that can track and maintain a picture of your business, the actors within it (including end users or customers) and evolving requirements, including less tangible ones like societal attitudes, is a great foundation for privacy policy and practice that respects diversity and can evolve as the landscape changes around you.

 

What non-mask wearers teach us about security awareness

elevenM Principal Arjun Ramachandran explores why observance of coronavirus restrictions and advice varies across countries and societies, and the potential lessons for those in the game of persuading people to adopt good security behaviours.  


“Wear a mask”. “Practice social distancing”. “Isolate”.

Clear, consistent, universal.

But cast your eyes from country to country, even community to community, and you see incredible variance in how the advice sticks.

The management of COVID-19 in the community highlights a core challenge in how companies cultivate positive security and privacy behaviours among their people. Clear guidance and engaging messages alone don’t always get the job done.

As public health practitioners have learned through the pandemic, and as those of us engaged in security and privacy persuasion must recognise, we work in a broader context.

The fingerprints of culture are evident in how different societies are responding to coronavirus guidelines and restrictions. Values like individualism, community, mutual obligation, respect for the elderly and deference to authority – and the extent to which they dominate a culture – clearly influence how communities behave, and how they will respond to advice and guidance.

“Maybe we’ll change our culture so that it’s not expected or brave of you to go to work sick. Maybe we’ll start to protect each other the way Asian cultures do. It’s pretty normal in Asian societies to wear a mask when you’re sick when you go out in public and to stay home if you can. We are the exact opposite. We wear masks to protect ourselves and we feel free to show up at a meeting when we have a fever.”
VICE

Sure – when you’re trying to inculcate good security or privacy practices, repeatedly broadcasting actionable advice will get these messages onto the radar of employees. Heck, if you’re clever enough to make the advice funny or entertaining, it might even go viral! You’ll have smashed a bunch of internal engagement metrics and hit some awareness goals.

But as with “Wear a mask!”, lack of awareness isn’t always the barrier. People can know what to do and still act contrarily. Or, they might follow the rules, but only in circumstances where compliance is monitored or defined.

If we want to go beyond compliance, and if we want behaviours to be both lasting and self-applied across contexts, then our goal must be for employees to internalise and identify with those desirable behaviours.

That’s why we encourage organisations embarking on security or privacy education activities to look at shaping culture as a vital complement (if not a precursor) to their education and awareness activities.

Culture is, ultimately, shared values and beliefs expressed through collective behaviours and practices.

Research tells us that values – more specifically, an alignment of values – create the conditions for people to internalise behaviours.

Yet while organisations abound in discrete bits of security advice (“don’t click this, make sure you do that”), the values underpinning the desired security and privacy behaviours are often never defined or articulated to employees. Doing so could be as simple as revisiting the company’s existing set of corporate values and expressing how security and privacy are integral to that value set.

For staff to identify with values and desired behaviours, they will also expect to see them being exhibited and advocated by those they admire or respect. This is where an organisation’s high-profile security champions can play a role, and where its most senior leaders have a responsibility.

For more on security culture, check out our recent work.

Towards a trustworthy COVIDSafe app

elevenM Principal Melanie Marks has joined other leading privacy experts in a submission to the Australian Government on what is required of the new federal legislation that will govern the COVIDSafe app.


The COVIDSafe app has been introduced at an unprecedented moment and a time of national urgency. To ensure we garner the level of community trust necessary for the app to succeed, we also need unprecedented and urgent legislation that ensures the right privacy safeguards are in place.

This is the essence of a submission made to the Attorney General’s Department by Australia’s leading privacy thinkers.

The submission – led by Peter Leonard (Principal, Data Synergies) with input from leading privacy practitioners including elevenM’s Melanie Marks – warns of a “backdoor” that could lead to leakage of data belonging to users of the COVIDSafe app if new federal legislation governing the app is introduced without sufficient safeguards and coverage.

The paper lays out a series of suggestions to achieve the ultimate objective of ensuring the COVIDSafe app is safe for all citizens to use for its stated purpose of contact tracing.

State and Territory agencies – which will ultimately handle user data from the app – are not currently regulated by the Privacy Act. While the app states that a user’s data – which includes a log of other users of the app they have come in contact with – will only be used for contact tracing by State or Territory officials, the paper notes that enforcement of this currently relies merely on “agreement” and reassurances of “good intent”.

It argues for “legislated assurance” that the data won’t be potentially available to other government agencies, law enforcement and so on.

The paper recommends stronger safeguards and controls to ensure that agencies’ handling of COVIDSafe data is separated from their other operations. It also calls for oversight of the legislation by a commissioner or ombudsman, and for the encryption of all COVIDSafe app data in transit and at rest.

Read the full paper here.

Four principles for contact tracing technology

elevenM Principal Melanie Marks takes a closer look at proposals to use digital technology to support contact tracing, as governments seek better ways to manage the COVID-19 pandemic.


With reports that Australia may follow in Singapore’s footsteps and build a tracking and tracing app that allows governments and citizens to get ahead of the COVID-19 pandemic, we must ensure that innovation and laws are channelled towards the “right” intended outcomes.

The benefits of introducing greater data sharing at a time of crisis are obvious. However, there are also risks, so it’s critical we proceed in a considered way.

For me the key principles are:

  1. Do what you can to save lives.
  2. There shall be no scope creep.
  3. Permissions shall be wound back when the crisis passes.
  4. Post implementation review is essential (covering law and processes).

We need to build for the short term or at least for a series of stages, featuring “gates” where civil liberties are checked before continuing. And we need guarantees that new architectures being introduced will not be put to secondary purposes. For example, whilst we might consider it okay to trace the movements of a COVID-19 affected patient in order to prevent exposure to others (primary purpose), we should not accept that the tracing can be used to identify how far a person strays from home, in order to hit them with a fine (secondary purpose). This is especially so if we consider that channels of procedural fairness may be harder to access in the circumstances (Robodebt comes to mind).

I had a chance to discuss these ideas recently with Jeremy Kirk, together with Patrick Fair and Susan Bennett, in an article published in DataBreachToday. Click here to read more.

Privacy: The moral guidepost for a just society

On Human Rights Day, elevenM Senior Consultant Jordan Wilson-Otto highlights privacy’s fundamental role in human dignity and respect. He argues that advocating for privacy without recognising its position as a human right may prove unfruitful in the long term.


Today is Human Rights Day. It marks the day, in 1948, on which the United Nations General Assembly adopted the Universal Declaration of Human Rights (UDHR), which lies at the heart of the promotion of human rights under international law.

Privacy is one of the human rights articulated under the UDHR, and this year was a big one for privacy in Australia. We’ve seen a major shift in the regulatory landscape, which brings some exciting potential but also great risk should we take our eye off privacy’s origins as a human right.

Yes, privacy is having something of a revival. Over the last few years, we’ve seen a substantial increase in public awareness and renewed interest from lawmakers and regulators across the globe. Privacy is being talked about in a lot of new places, and in a lot of new ways. This is unsurprising. Data has become so central to modern economies that it (and therefore privacy) is now a key driver of productivity, and user data is so essential to the advertiser-funded digital ecosystem that it is both a measure of market power and a focus for consumer protection.

This year, the Australian Competition and Consumer Commission (ACCC) released its Digital Platforms Inquiry Final Report, which highlighted the intersection of privacy, competition and consumer protection. The ACCC recognised that strong privacy laws can support and even drive competition and consumer protection objectives – for example, by addressing sources of market inefficiency such as information asymmetries and bargaining power imbalances, and by empowering consumers to make informed choices about how their data is processed. This, in turn, can increase competition and encourage innovation.

These themes were echoed and reinforced in the ACCC’s Customer Loyalty Schemes Final Report, released this month. Together, the reports signal a sustained focus on the ways in which data practices can raise competition and consumer issues. This represents a significant shift in the regulatory landscape and is likely to weigh heavily on how our privacy rights are managed in the coming years.

Principle and utility

Both the ACCC and the Productivity Commission argue for privacy in a way that is quite distinct from its position as a human right. They argue that privacy should be protected, but not because it is essential for our dignity and autonomy as individuals, or because it protects us from discrimination or preserves rights such as freedom of expression and freedom of association. They value privacy for its economic contribution, rather than as a necessary precondition for individual freedom and democracy.

And that’s fine, up to a point – there’s no harm in talking about the utility of human rights protection. For privacy advocates this can even be an effective strategy. Hitching privacy to these economic objectives provides a concrete and quantifiable argument for its protection, which may resonate in ways that the language of rights does not.

But arguments from utility are fragile. They only work when everything is pulling in the same direction. When the trade winds inevitably change and privacy is seen to stand in the way of competition or innovation, its utility falls away. By ignoring privacy’s intrinsic value, utilitarian arguments give us no way to weigh privacy against competing interests.

So it’s important that utility doesn’t become our only strategy. We need to keep in view the intrinsic value of privacy as a fundamental human right. As a human right, privacy carries a unique moral weight. It forms part of a universal baseline of dignity and respect to which we are all entitled – a baseline that has endured, with near-universal consensus, for over 70 years. Human rights like privacy provide the moral guideposts for a just, equitable and humane society.

If we become too focused on the practical utility of privacy, we risk losing sight of its deeper significance. If that happens, we risk trading it away for less than it is worth.