The push to protect children online will soon collide with an equal and opposite political force: the criminalization of abortion. In a country where many states will soon treat fetuses as children, surveillance tools designed to protect children will be exploited to target abortion. And one of the biggest threats to reproductive freedom will come, unintentionally, from its staunchest defenders in the European Union.
Last week, the EU introduced a draft regulation that would effectively ban end-to-end encryption and force Internet companies to scan for abusive material. Regulators would not only require chat providers to scan all messages for child sexual abuse material (CSAM), a controversial practice that companies like Meta already follow on Facebook Messenger, but would also require platforms to scan every sentence of every message for signs of illegal activity. These rules would affect anyone who uses a chat app made by a company doing business in the EU, meaning virtually all American users would be subject to these scans.
Regulators, businesses, and even staunch surveillance opponents on both sides of the Atlantic have framed CSAM as a unique threat. And while many of us may point to a future in which algorithms magically detect harm to children, even the EU admits that scanning would require “human supervision and review.” The EU also ignores the mathematical reality of encryption: once we allow a surveillance tool to target one set of content, it can easily be retargeted at another. The same algorithms can be trained to flag religious content, political messages, or abortion information. It’s exactly the same technology.
Earlier child protection technologies offer a cautionary tale. In 2000, the Children’s Internet Protection Act (CIPA) required federally funded schools and libraries to block content deemed “harmful to minors.” More than 20 years later, school districts from Texas to progressive Arlington, Virginia, have used this law to block the sites of Planned Parenthood and other abortion providers, along with a broad spectrum of progressive, anti-racist, and LGBTQ content. Congress never said that accurate medical information about abortion is “harmful to minors,” but that is the claim some states make today, even with Roe still on the books.
Post-Roe, many states will treat abortion not only as child abuse but, in several states, likely as murder, prosecuted to the fullest extent of the law. European regulators and technology companies are unprepared for the impending civil rights catastrophe. Whatever companies say about their pro-choice values, they will behave very differently when faced with an anti-abortion court order and the threat of imprisonment. An effective ban on end-to-end encryption would allow U.S. courts to force Apple, Meta, Google, and others to search their platforms for abortion-related content, and to hold them in contempt if they refused.
Even with abortion still constitutionally protected, police are already prosecuting pregnant women with all the surveillance tools of modern life. As Cynthia Conti-Cook of the Ford Foundation and Kate Bertash of the Digital Defense Fund wrote in The Washington Post last year, “The use of digital forensic tools to investigate pregnancy outcomes … poses an insidious threat to our fundamental freedoms.” Police use search histories and text messages to charge pregnant women with murder after a stillbirth. The technique is not only invasive but highly error-prone, easily mistaking medical questions for proof of criminal intent. For years, we’ve seen digital payment and purchase records, even PayPal histories, used to arrest people for buying and selling abortion drugs like mifepristone.
Pregnant people should worry not only about the companies that currently hold their data but about everyone those companies could sell it to. According to a 2019 lawsuit I helped file against the data broker and news service Thomson Reuters, the company sells data on millions of Americans, including abortion histories, to police, private companies, and even U.S. Immigration and Customs Enforcement (ICE). Some state regulators are already sounding the alarm, such as a recent “consumer alert” from New York State Attorney General Letitia James, which warns how period-tracking apps, text messages, and other data can be used to target pregnant people.
We need to re-evaluate every surveillance tool, public and private, with an eye toward the pregnant people who will soon be targeted. For technology companies, this means rethinking what it means to promise customers privacy. Apple has long garnered praise for how it protects users’ data, especially when it went to federal court in 2016 to oppose a government demand that it hack a suspect’s iPhone. Its hard-line privacy stance was especially striking because the court order came as part of a terrorism investigation.
But the company has been far less willing to wage the same fight when it comes to CSAM. Last summer, Apple proposed building CSAM surveillance into all iPhones and iPads, scanning for content on billions of devices. The Cupertino giant quickly backed down in the face of what the National Center for Missing and Exploited Children infamously called “the screeching voices of the minority,” but it never completely abandoned the effort, recently announcing CSAM scanning for users in the U.K. Apple is not alone: Meta not only actively scans the content of unencrypted messages on its Facebook platform but also sidesteps its “end-to-end encryption” claims on WhatsApp by accessing decrypted copies of messages flagged by users. Similarly, Google builds CSAM detection into many of its platforms, reporting hundreds of thousands of cases to authorities each year.