Artificial Intelligence in Policing Is the Focus of Encode Justice

This op-ed examines how artificial intelligence can create a feedback loop of discrimination.
A surveillance camera hangs on the exterior of the Federal Bureau of Investigation headquarters. (Bloomberg)

Nijeer Parks was bewildered when he was arrested in February 2019. Apparently, he’d been accused of shoplifting and attempting to hit a police officer with a car at a Hampton Inn, as the New York Times reported. But Woodbridge, New Jersey, where the crime had taken place, was 30 miles from his home, and Parks had neither a car nor a driver’s license at the time, according to NBC News. Court documents indicated that he had no idea how he’d been implicated in a crime he knew he didn’t commit, until he discovered that the case against him rested solely on a flawed facial-recognition match. According to a December report by the Times, this was the third known instance of a wrongful arrest caused by facial recognition in the U.S. All three victims were Black men.

Algorithms failed Parks twice: First, he was mistakenly identified as the suspect; then, he was robbed of due process and jailed for 10 days at the recommendation of a risk assessment tool used to assist pretrial release decisions. These tools have been adopted by courts across the country despite evidence of racial bias and a 2018 letter signed by groups like the ACLU and NAACP cautioning against their use. At one point, Parks told the Times, he even considered pleading guilty. The case was ultimately dropped, but he’s now suing the Woodbridge Police Department, the city of Woodbridge, and the prosecutors involved in his wrongful arrest.

These are the costs of algorithmic injustice. We’re approaching a new reality, one in which machines are weaponized to undermine liberty and automate oppression with a pseudoscientific rubber stamp; in which opaque technology has the power to surveil, detain, and sentence, but no one seems to be held accountable for its miscalculations.


U.S. law enforcement agencies have embraced facial recognition as an investigative aid despite a 2018 MIT study that found error rates ranging from 0.8% for light-skinned men to 34.7% for dark-skinned women. In majority-Black Detroit, the police chief estimated last year that his department’s software misidentified people about 96% of the time (though the company behind the software told Vice it doesn’t keep statistics on the accuracy of its real-world use), yet he has still refused to support a ban.

Artificial intelligence (AI) is built by feeding a computer program historical data so it can deduce patterns and extrapolate from those patterns to make predictions on its own. But this often creates a feedback loop of discrimination. So-called predictive policing tools, for example, purport to identify future crime hot spots and optimize the allocation of law enforcement resources, but because their training data can reflect racially disparate levels of past police presence, they may simply keep flagging Black neighborhoods regardless of the true crime rate. This is exactly what Minority Report warned us about.
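
To see how that loop sustains itself, consider a minimal simulation sketch. Everything here is hypothetical and invented purely for illustration: two neighborhoods have identical true crime rates, but one starts with more recorded arrests because it was historically over-policed, and a tool that sends patrols wherever past arrests are concentrated keeps reproducing that initial disparity.

```python
import random

random.seed(0)

# All numbers here are hypothetical, chosen only to illustrate the loop.
TRUE_CRIME_RATE = 0.05    # identical in both neighborhoods
RESIDENTS = 10_000        # residents per neighborhood

# Historical arrest records reflect past police presence, not actual crime:
# neighborhood A starts out over-policed relative to neighborhood B.
recorded = {"A": 900, "B": 300}

for year in range(1, 11):
    total_recorded = sum(recorded.values())
    for hood in recorded:
        # The "predictive" tool allocates patrols in proportion to past
        # arrests, so a neighborhood's share of patrols equals its share
        # of the historical record.
        patrol_share = recorded[hood] / total_recorded
        # More patrols mean more of the same underlying crime gets observed
        # and recorded, which becomes next year's training data.
        crimes = sum(random.random() < TRUE_CRIME_RATE for _ in range(RESIDENTS))
        new_arrests = sum(random.random() < patrol_share for _ in range(crimes))
        recorded[hood] += new_arrests
    a_share = recorded["A"] / sum(recorded.values())
    print(f"year {year}: neighborhood A holds {a_share:.0%} of recorded arrests")
```

Running this prints a share for neighborhood A that hovers around 75% year after year: both neighborhoods break the law at exactly the same rate, but the initial disparity in the record is fed back into the tool as “evidence” and never corrects itself.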

Princeton University sociologist Ruha Benjamin has sounded the alarm about a “new Jim Code,” a reference to the Jim Crow laws that once enforced segregation in the U.S. Others have alluded to a tech-to-prison pipeline, making it crystal clear that mass incarceration isn’t going away; it’s just being repackaged with a sophisticated, high-tech veneer.

That’s not to say AI can’t be a force for good. It has revolutionized disease diagnosis, helped forecast natural disasters, and aided in detecting fake news. But the misconception that algorithms are some sort of infallible silver bullet for all our problems (“technochauvinism,” as data journalist Meredith Broussard termed it in her 2018 book Artificial Unintelligence) has brought us to a place where AI is making high-stakes decisions that are better left to humans. And in the words of Silicon Valley congressman Ro Khanna (D-CA), the technological illiteracy of “most members of Congress” is “embarrassing,” precluding effective governance.

That must change; racial, economic, and social justice require algorithmic justice. My peers and I are leading the charge to ensure that AI leaves no one behind. Last July, I learned about California’s Proposition 25, which would have mandated the use of pretrial risk assessment algorithms. I leapt into action, launching a campaign to harness the power of youth in opposition to the measure. We organized aggressively, hosting phone banks and town halls in partnership with community groups and formerly incarcerated people, and ours was the only high school student voice on the issue. California voters ultimately rejected Prop 25 by a 13-point margin, defying corporate interests, the political establishment, and analysts’ expectations.

With that victory under our belt, we became Encode Justice, an international, youth-powered organization fighting for ethical AI. Our team spans 23 U.S. states and 11 countries. We’re crafting policy proposals, building legislative momentum, teaching AI ethics workshops, and expanding our Medium publication. We’ve lobbied city councils to ban facial recognition and are now establishing a collaborative committee across student-led organizations. At the heart of our work is a commitment to uniting high school technologists, activists, and creatives in what could be a defining battle of our time. You can join us by telling your member of Congress that you support a moratorium on face surveillance, applying for our fellowship program, starting a regional chapter, or signing up to contribute an article.

Without intervention, the promise of AI may be quickly eclipsed by its perils. This isn’t an abstract technical phenomenon; it’s a 21st-century civil rights issue. Our generation is the most progressive yet, and we’ve only ever known a world shaped by the internet. If we’re not on the front lines of regulating technology, we risk being complicit in turning isolated incidents into institutional trends. We risk jeopardizing the freedom of more people like Nijeer Parks. We must refuse to remain silent — we must encode justice.
