The growing bipartisan support for privacy legislation seems to be responding to the public “techlash” against a drumbeat of data breaches and social media misinformation campaigns. It also appears aimed at preventing a patchwork of state laws after California passed its own privacy legislation in 2018.
While the time is right to enact a new law, what you may not realize is that data privacy is actually an important economic justice issue. As a clinical law professor representing low-income people for the last 20 years, I have seen how one’s digital privacy experience varies depending on social class.
And poorer Americans are among those who have the most at risk.
Take data brokers, which are companies that sell personal data collected from sources such as public records, Internet browsing activity, social media posts, emails, app usage and retail loyalty cards.
This industry is one reason why you are barraged with online ads for a product you may have glanced at only briefly. For most of us, this is simply an annoying fact of life. For low-income people, the harms extend beyond this shared sense of creepiness.
For example, the digital dossiers assembled by data brokers are used to target low-income Americans for predatory products such as payday loans, high-interest mortgages and for-profit educational scams. These brokers segment consumers into highly specific categories, such as “rural and barely making it” and “credit crunched: city families.”
While a slew of lawsuits pushed Facebook to stop allowing its advertisers to target groups based on gender, race, zip code and age, advertisers can continue to discriminate against people simply because they are poor. Poverty is not a protected category under our civil rights laws or the Constitution.
Meanwhile, police are using big data to predict criminal activity, particularly in low-income and minority neighborhoods. The problem is this creates a vicious cycle in which communities that are already heavily policed trigger predictive software that urges more aggressive policing.
Employers are using applicant tracking systems to predict whether potential employees will perform on the job. Colleges are using algorithms to assess which prospective students are likely to stick around for graduation. Landlords are scouring credit reports to predict whether prospective tenants will pay the rent.
And while these can be legitimate objectives, society puts too much faith in the algorithms used to predict human behavior. Computer outputs may have the veneer of objectivity, but human beings impart their own conscious and implicit biases into the software that fuels these predictions. This can reinforce longstanding prejudices.
Not surprisingly, then, in states that rely on algorithms to assess eligibility for public benefits such as Medicaid, thousands of qualified people have been kicked out of programs, imperiling their health and costing lives.
Automated decision-making strips social service delivery of needed nuance.
Data breaches are always a nightmare, but they can be especially devastating for people living on the financial edge. They generally can't afford the costly and complicated measures needed to clean their credit after someone else steals their identity. Economic losses resulting from a breach can push low-income people over a financial cliff.
All these harms persist in part because the U.S. still lacks an overarching privacy law.
Although all 50 states now require companies to notify consumers about data breaches, California is the only state to pass a comprehensive privacy law governing how data is collected and used. However, multiple states are considering similar legislation.
Lawmakers working on a federal privacy law should look to Europe for inspiration.
About a year ago, the European Union began implementing the General Data Protection Regulation, which gives its citizens a bevy of rights to control their data. It also includes provisions that could address the data privacy needs of low-income people.
For instance, the GDPR prohibits certain kinds of automated profiling. This could put the brakes on profiling that limits people’s access to jobs, housing and other life necessities for illegitimate reasons. The law also gives people a right to an explanation about automated decision-making, which could open the current “black box” to help people understand and challenge denials of goods and services.
The law includes a right to be forgotten, which requires that personal data be erased when it's no longer needed for the original purpose or when a person asks for it to be scrubbed. Fundamentally, it means people can get a clean data slate as their financial condition improves.