Somewhere in the past decade, a phrase became the default dismissal of every privacy conversation: “I have nothing to hide.”

It sounds reasonable. If you’re not doing anything wrong, what’s the problem? Surveillance only threatens criminals, the logic goes. Law-abiding people have nothing to fear from data collection. Privacy concerns are the province of the paranoid, the criminal, or those ideologically opposed to technology.

This argument is repeated so often that it has become a kind of received wisdom. It is also, on careful examination, deeply confused – and the consequences of accepting it have become visible in ways that are hard to ignore.


The Argument Gets Privacy Wrong

The “nothing to hide” argument is built on a specific and narrow model of what privacy is for. In this model, privacy is a shield for wrongdoing. If you’re hiding something, it’s because you’ve done something – or plan to do something – that you shouldn’t. Privacy is guilt-adjacent.

But this isn’t why humans developed privacy norms, and it isn’t how privacy actually functions in a healthy society.

Privacy is the capacity to control your own narrative. To develop thoughts and opinions without external judgment while they’re still forming. To share different aspects of yourself in different contexts – your professional self at work, your vulnerable self with close friends, your political opinions in the voting booth. These aren’t secrets. They’re the normal, healthy compartmentalization that every person manages, offline and online.

The legal scholar Daniel Solove, who has written extensively on privacy, puts it directly: the nothing-to-hide argument focuses on only one or two specific privacy problems – the disclosure of personal information or surveillance – while ignoring the broader set of harms that privacy violations cause. Discrimination. Manipulation. Loss of autonomy. Institutional asymmetry.


The Chilling Effect: When Being Watched Changes Behavior

There’s a well-documented psychological phenomenon that privacy scholars call the chilling effect: when people know they’re being observed, they modify their behavior – even when they’re not doing anything wrong.

This isn’t theoretical. Studies have consistently shown that surveillance awareness reduces willingness to read controversial material, seek sensitive health information, and express dissenting political opinions. A 2016 study by Jonathon Penney, published in the Berkeley Technology Law Journal, found that after the 2013 Snowden disclosures about NSA surveillance there was a measurable, sustained decline in traffic to Wikipedia pages on terrorism-related topics – even among users with no connection to terrorism. They were simply choosing not to leave a record.

This matters because democratic societies depend on people being willing to form and express heterodox opinions. Journalism depends on sources being willing to speak. Political dissent depends on people being willing to be counted. If surveillance – including commercial surveillance – suppresses the willingness to engage with controversial ideas, the effect is a measurable narrowing of what it’s possible to say and think in public.

You don’t need to be planning a crime to be affected by this. The chilling effect works on lawyers who want to research case law involving sensitive topics. On doctors looking up drug interactions for unconventional treatments. On activists who want to understand the arguments of their opposition. On ordinary people who’ve been through something they’re not ready to share with anyone, let alone an algorithmic profiling system.


The Data Economy: Who Actually Benefits

The “nothing to hide” argument implies that data collection is essentially neutral – collected, perhaps, and then inert. The actual data economy operates very differently.

The scale: The global data broker industry was valued at approximately $316 billion in 2023 and continues to grow. Hundreds of companies – many of which you’ve never heard of and never interacted with – hold detailed files on most adults in developed economies. These files can include: purchase history, browsing behavior, location data aggregated from apps, income estimates, relationship status, health inferences, and predictive scores about future behavior.

What it’s used for: The primary use is advertising targeting. But the data flows into many other contexts:

  • Insurance pricing – behavioral and demographic data can inform risk assessments that affect premiums
  • Credit and lending – alternative data sources (your location patterns, your social graph) are increasingly used to supplement traditional credit scores
  • Employment screening – background check companies have access to significant personal histories
  • Housing – tenant screening services use data beyond formal credit checks
  • Dynamic pricing – airlines, hotels, and e-commerce platforms use behavioral data to show different prices to different users

None of this requires you to have done anything wrong. It requires you to exist in a data-collecting world and to have generated a record that an algorithm can score.
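To make “a record that an algorithm can score” concrete, here is a toy sketch of behavioral scoring. Every feature name and weight below is invented for illustration – it is not any real broker’s or lender’s model, just the general shape of a weighted scoring function applied to a profile record:

```python
# Toy illustration of algorithmic scoring from behavioral data.
# All feature names and weights are hypothetical, invented for this sketch.

def behavioral_score(profile: dict) -> float:
    """Weighted sum of behavioral signals, clamped to a 0-100 range."""
    weights = {
        "late_night_browsing_ratio": -8.0,   # hypothetical negative signal
        "address_changes_last_5y":   -5.0,   # hypothetical instability proxy
        "income_estimate_percentile": 0.4,   # hypothetical positive signal
        "on_time_bill_signals":       0.3,
    }
    score = 50.0  # neutral baseline
    for feature, weight in weights.items():
        score += weight * profile.get(feature, 0.0)
    return max(0.0, min(100.0, score))

profile = {
    "late_night_browsing_ratio": 0.6,
    "address_changes_last_5y": 2,
    "income_estimate_percentile": 40,
    "on_time_bill_signals": 30,
}
print(behavioral_score(profile))
```

The point of the sketch is that nothing in the input is wrongdoing – it is ordinary life, reduced to features and silently priced.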


“Free” Is the Price You Pay With Attention – and Data

One of the most effective framing shifts for understanding the data economy: nothing on the internet is actually free. Services that don’t charge you money are charging you data. The infrastructure for Gmail, Facebook, Google Search, Instagram, TikTok – all of it is funded by advertising, which is funded by the detailed behavioral profiles those services build from your activity.

This isn’t necessarily malicious. Some of these trade-offs are reasonable. A free email service in exchange for ads targeted to your email topics is a straightforward deal, and millions of people have decided they’re comfortable with it.

The problem is consent and transparency. Most people have no idea how sophisticated these profiles are, how widely the data is shared, or how many third parties receive it. The terms of service you agreed to without reading authorized a data-sharing ecosystem that you had no practical ability to evaluate or negotiate.

When you search for symptoms of a medical condition, Google doesn’t just note that you searched. The search may trigger ad targeting across Google’s entire advertising network. Data may be shared with analytics partners. Health-inferred data may end up in broker databases. You searched for information. You generated a permanent inference about your health status that will follow your advertising profile.

The “nothing to hide” frame misses this entirely because it assumes that data collection is simple and bounded. It is neither.


The Personalization Paradox

Research consistently finds a contradiction at the heart of how people relate to online data collection. Most people say they value their privacy and are concerned about data collection. Most people also enjoy personalized content, recommendations, and experiences that rely on data collection.

This is the personalization paradox: people want the benefits of data collection without the risks, and the system is designed to give them just enough benefit to accept just enough risk.

The resolution isn’t to reject personalization entirely. It’s to be deliberate about what you’re exchanging and with whom. There’s a meaningful difference between Netflix using your viewing history to recommend shows you’ll like (confined to their platform, relatively low risk) and a data broker selling inferences about your health, politics, and financial situation to unknown third parties (broad distribution, significant risk).

Being conscious of this distinction – rather than accepting “I have nothing to hide” as a reason to stop thinking about it – is the beginning of practical privacy literacy.


Privacy as a Collective Good

One argument that the “nothing to hide” framing entirely misses: your privacy decisions affect other people.

When you grant an app access to your contacts, you share the information of every person in your address book – people who never agreed to share their data with that app. When you use a family photo sharing service with permissive data practices, you may be building a biometric database of your children and extended relatives.
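The contact-upload point can be made concrete with a toy example (names invented): even a handful of consenting users can expose people who never installed the app at all.

```python
# Toy sketch: three users consent and upload their address books.
# The union of those books includes people who never consented.
address_books = {
    "alice": {"bob", "carol", "dan"},
    "bob":   {"alice", "carol", "erin"},
    "carol": {"alice", "frank"},
}

consented = set(address_books)                    # people who installed the app
exposed = set().union(*address_books.values())    # everyone whose data arrived
never_consented = exposed - consented             # exposed without consenting

print(sorted(never_consented))
```

Three installs, three bystanders in the database – and real address books are hundreds of entries deep.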

Privacy advocates have sometimes compared this to vaccination: your individual decision affects the herd. Every person who normalizes extensive data sharing makes it harder to resist for others, and contributes data to systems that affect people who made different choices.

The “nothing to hide” argument is individualistic: I have nothing to hide. But the data ecosystem is collective. Your data combines with others’ to build population-level models that affect everyone – including people who were careful about their own privacy.


What “Protecting Your Privacy” Actually Means in Practice

The goal isn’t perfect privacy, which is neither achievable nor necessarily desirable. The goal is deliberate, informed choices about what you share, with whom, and under what conditions.

This means:

Knowing what’s being collected. Exercise your GDPR rights if you’re in the EU – request a copy of the data major platforms hold about you. Many people are surprised by the specificity and volume.
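When an export does arrive, it is often a directory of JSON files. A small sketch of summarizing one – the record layout and field names here are hypothetical, since exports vary widely by platform:

```python
import json
from collections import Counter

# Hypothetical export format; real platform exports differ in structure.
export = json.loads("""
[
  {"category": "location", "timestamp": "2023-01-04T09:12:00Z"},
  {"category": "search",   "timestamp": "2023-01-04T09:15:00Z"},
  {"category": "location", "timestamp": "2023-01-04T10:02:00Z"}
]
""")

# Count how many records the platform holds in each category.
counts = Counter(record["category"] for record in export)
for category, n in counts.most_common():
    print(f"{category}: {n} records")
```

Even this crude tally – records per category – is usually enough to produce the surprise mentioned above.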

Reading privacy policies – or at least the key parts. You don’t need to read every word. Third-party sharing, data retention, and advertising partnerships are the sections that matter most.

Using the technical tools available. Ad blockers, tracker blockers, and browser privacy settings have become significantly more effective. Using them is not paranoia – it’s the digital equivalent of closing your curtains.
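Mechanically, most tracker blockers work from domain blocklists: each outgoing request’s host is checked against a list, and matches are dropped before the request leaves. A minimal sketch of that matching logic – the blocklist entries here are illustrative placeholders, not a real list:

```python
# Minimal sketch of blocklist-based tracker blocking.
# Entries are illustrative; real blockers ship curated lists with
# tens of thousands of domains (EasyPrivacy-style filter lists).

BLOCKLIST = {"tracker.example", "ads.example", "metrics.example"}

def is_blocked(host: str) -> bool:
    """Block a host if it, or any parent domain, appears on the list."""
    parts = host.lower().split(".")
    # Check host, then each parent domain (stop before the bare TLD).
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return True
    return False

print(is_blocked("pixel.tracker.example"))  # subdomain of a listed domain
print(is_blocked("news.example"))           # not listed
```

Real blockers add exception rules and path-based filters on top, but the core is this: a set lookup, applied to every request, on your behalf.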

Expecting transparency from services you use. Companies that take privacy seriously publish what data they collect, how long they keep it, who they share it with, and why. If a service’s privacy policy is deliberately vague, that vagueness is a choice.


The Argument From Integrity

There’s a simpler version of the privacy argument that doesn’t rely on threat scenarios at all:

You have a right to your own inner life. You have a right to develop thoughts you’re not sure about yet, to make mistakes privately, to change your opinion without that change being documented. You have a right to be a different person in different contexts.

None of this requires you to be doing anything wrong. It simply requires you to be human.

The “nothing to hide” argument, taken seriously, would eliminate the possibility of all of it. It implies that there is no legitimate reason for a person to want anything to remain private – that any preference for privacy is inherently suspicious.

That’s not a framework for a healthy society. It’s a framework for a surveilled one.

The question isn’t whether you have something to hide. The question is who gets to decide what counts as “something” – and whether you’ve thought carefully enough about your answer.

