If the purveyors of school surveillance systems are to be believed, K-12 schools will soon operate like some agglomeration of Minority Report, Person of Interest, and RoboCop. “Military-grade” systems would slurp up student data, picking up the mere hint of harmful ideas and dispatching officers before would-be perpetrators could carry out their vile acts. In the unlikely event that someone slipped past the predictive systems, they would inevitably be stopped by next-generation weapon-detection systems and biometric sensors that read a person’s gait or tone of voice, warning authorities of imminent danger. The final layer might be the most technologically advanced of all: some form of drone, or perhaps even a robot dog, able to disarm, distract, or disable the dangerous individual before any real damage is done. If we just invest in these systems, the thinking goes, our children will finally be safe.
Not only is this not our present, it will never be our future, no matter how expansive and elaborate surveillance systems become.
In recent years, numerous companies have sprung up, all promising a variety of technological interventions that will reduce or even eliminate the risk of school shootings. The proposed “solutions” range from tools that use machine learning and human monitoring to predict violent behavior, to artificial intelligence paired with cameras that determine people’s intent from their body language, to microphones that identify the potential for violence in a tone of voice. Many of them use the specter of dead children to market their technology. The surveillance company AnyVision, for example, uses footage from the Parkland and Sandy Hook shootings in presentations pitching its facial recognition and firearm-detection technology. Immediately after the Uvalde shooting last month, the company Axon announced plans for a taser-equipped drone as a means of dealing with school shooters. (The company later paused the plan after members of its ethics board resigned.) The list goes on, and each company would have us believe that it alone holds the solution to this problem.
The failure here lies not just in the systems themselves (Uvalde, for instance, appears to have had at least one of these “security measures” in place), but in the way people conceive of them. Much like policing itself, every failure of a surveillance or security system prompts calls for still broader surveillance. When a danger is not anticipated or prevented, companies often cite the need for more data to close the gaps in their systems, and governments and schools routinely buy it. In New York, despite the repeated failure of surveillance mechanisms to prevent (or even catch) the recent subway shooter, the city’s mayor has decided to double down on the need for even more surveillance technology. Meanwhile, schools in the city are ignoring a moratorium on facial recognition technology. The New York Times reports that U.S. schools spent $3.1 billion on security products and services in 2021 alone. And the recent congressional gun legislation includes another $300 million to increase school safety.
But at root, what many of these predictive systems promise is a measure of certainty in situations where none can exist. Tech companies consistently pitch the notion of complete data, and therefore perfect systems, as something just over the next ridge: an environment so totally surveilled that any and all antisocial behavior can be predicted and violence thereby prevented. But a comprehensive data set of ongoing human behavior is like the horizon: it can be conceptualized but never actually reached.
Currently, companies engage in a variety of strange techniques to train these systems: some stage mock attacks; others use action movies like John Wick, hardly good indicators of real life. At some point, macabre as it sounds, it is conceivable that these companies will train their systems on data from real-world shootings. Yet even if footage of real incidents were available (and in the large quantities these systems require), the models would still fail to accurately predict the next tragedy based on the previous ones. Uvalde was different from Parkland, which was different from Sandy Hook, which was different from Columbine.
Technologies that offer predictions about intent or motivation are making a statistical bet on the probability of a given future based on data that will always be incomplete and out of context, no matter its source. The basic assumption in using a machine learning model is that there is a pattern to be identified; in this case, that there is some “normal” behavior that shooters exhibit at the scene of the crime. But finding such a pattern is unlikely. This is especially true given the nearly continuous shifts in teenagers’ lexicon and practices. Arguably more than many other segments of the population, young people change the way they speak, dress, write, and present themselves, often explicitly to evade and elude the watchful eye of adults. Developing a consistently accurate model of that behavior is nearly impossible.
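To make the underlying problem concrete, here is a minimal, purely illustrative sketch (synthetic numbers, not real data, and a deliberately simplified setup): a classifier trained on thousands of “normal” examples and only a handful of rare “positive” ones, whose distribution then drifts the way adolescent behavior does, tends to miss exactly the cases it was built to catch while still raising alarms on ordinary behavior.

```python
# Illustrative sketch only: synthetic features standing in for "behavioral data".
# Shows how a model trained on a few rare positives can fail once the underlying
# distribution of those positives shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: 5,000 "normal" points and only 5 "positive" ones.
X_normal = rng.normal(loc=0.0, scale=1.0, size=(5000, 10))
X_rare = rng.normal(loc=2.0, scale=1.0, size=(5, 10))
X_train = np.vstack([X_normal, X_rare])
y_train = np.array([0] * 5000 + [1] * 5)

clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)

# Test data: the rare class has drifted to a different region of feature space,
# standing in for behavior that has changed since the training examples were collected.
X_test_normal = rng.normal(loc=0.0, scale=1.0, size=(5000, 10))
X_test_rare = rng.normal(loc=-2.0, scale=1.0, size=(5, 10))
X_test = np.vstack([X_test_normal, X_test_rare])
y_test = np.array([0] * 5000 + [1] * 5)

pred = clf.predict(X_test)
caught = int(np.sum((pred == 1) & (y_test == 1)))   # drifted positives detected
false_alarms = int(np.sum((pred == 1) & (y_test == 0)))  # normals flagged

print(f"drifted positives caught: {caught} of 5")
print(f"false alarms on normal behavior: {false_alarms}")
```

The numbers are made up, but the mechanism is the point: with vanishingly few real examples and a target that keeps moving, the model’s learned “pattern” stops describing the thing it is supposed to predict.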