If Tech Fails to Design for the Most Vulnerable, It Fails Us All


What do Russian protesters have in common with Twitter users who worry that Elon Musk will read their direct messages, and with people concerned about the criminalization of abortion? It would serve all of them to be protected by a stronger set of design practices from technology companies.

Let's back up. Last month, Russian police coerced protesters into unlocking their phones to search for evidence of dissent, leading to arrests and fines. Worse yet, Telegram, one of the main chat apps used in Russia, is vulnerable to these searches. Even just having the Telegram app on a personal device might imply that its owner does not support the Kremlin's war. But the builders of Telegram have failed to design the app with personal safety in high-risk environments in mind, and not just in the Russian context. Telegram can thus be weaponized against its own users.

Similarly, amid the back-and-forth over Elon Musk's plan to buy Twitter, many people who use the platform have expressed concern about his bid, his proposals to change algorithmic content moderation, and the other design changes floated on the whim of his $44 billion fancy. Pushing through the recommendations of someone with no framework for risk and harm to marginalized people leads to proclamations like "authenticate all humans." This looks like a push to eliminate online anonymity, something I have written about personally. It is ill thought out, harmful to those most at risk, and backed by no real methodology or evidence. Beyond his vague bluster for change, Musk's previous actions, combined with the existing harms of Twitter's current structures, make it clear that we are heading toward further impacts on marginalized groups, such as Black and POC Twitter users and trans people.

Meanwhile, the lack of safety infrastructure is hitting home in the US. The leak of the Supreme Court's draft opinion in Dobbs v. Jackson Women's Health Organization shows that the protections provided under Roe v. Wade are mortally under threat. With the projected criminalization of people seeking or providing abortion services, it has become increasingly clear that the tools and technologies most widely used for accessing vital health care data are unsafe and dangerous.

In all of these contexts, similar steps could protect users. If the makers of these tools had designed their apps with a focus on safety in high-risk environments, for the people who are often dismissed as the more "extreme" or "edge" cases and therefore ignored, the weaponization that users fear would not be possible, or at the very least users would have tools to manage their risk.

The reality is that making technology better, safer, and less harmful requires design grounded in the lived realities of those who are most marginalized. These "edge cases" are often dismissed as falling outside the scope of a typical user's likely experiences. Yet they are powerful indicators for understanding the flaws in our technologies. This is why I refer to these cases, the people, groups, and communities who are most impacted and least supported, as "decentered." The decentered are those who are most marginalized and often most criminalized. By understanding and establishing who is most impacted by different social, political, and legal frameworks, we can understand who would most likely be a victim of the weaponization of certain technologies. And, as an added benefit, technology that is refocused on these decentered cases will always be generalizable to the broader user base.

From 2016 until earlier this year, I led a research project at the human rights organization Article 19, together with local organizations in Iran, Lebanon, and Egypt and with support from international experts. We explored the experiences of queer people who faced police persecution arising from their use of specific personal technologies. Take the experience of a queer Syrian refugee in Lebanon who is stopped at a police or army checkpoint for papers. Their phone is searched arbitrarily. The icon of a queer app, Grindr, is spotted, and the officer concludes that the person is queer. Other areas of the refugee's phone are then searched, revealing what is deemed "queer content." The refugee is taken in for further interrogation and subjected to verbal and physical abuse. They now face charges under Article 534 of the Lebanese Penal Code and possible imprisonment, fines, and/or the revocation of their immigration status in Lebanon. This is just one example among many.

But what if that icon were hidden, so that an app revealing a person's sexuality was not immediately visible to whoever held the phone, while still letting the individual keep the app and connect with other queer people? Based on this research and in collaboration with the Guardian Project, Grindr worked to implement a stealth mode in its product.

The company also implemented our other recommendations with similar success. Changes such as the discreet app icon let users make the app appear as a common utility, such as a calendar or calculator. In an initial police search, users can therefore sidestep the risk of being outed by the content or imagery of the apps on their phones. Even though this feature was created based on the results of extreme cases, such as that of the queer Syrian refugee, it proved popular with users around the world. In fact, it became so popular that it went from being fully available only in "high-risk" countries to being offered internationally for free in 2020, along with the popular PIN feature that was also introduced through this project. This was the first time a dating app took such drastic security measures for its users; many of Grindr's competitors followed suit.
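To make the idea concrete: on Android, a discreet-icon feature of this kind is commonly built with launcher activity aliases that an app toggles at runtime, so the home screen shows either the real icon or an innocuous one. The sketch below is only an illustration of that general mechanism, not Grindr's actual implementation; the package and alias names are hypothetical placeholders.

```kotlin
import android.content.ComponentName
import android.content.Context
import android.content.pm.PackageManager

// Hypothetical alias names. A real app would declare these as <activity-alias>
// entries in AndroidManifest.xml, both pointing at the same launcher activity
// but with different labels and icons (e.g. a calculator icon for the discreet one).
private const val DEFAULT_ALIAS = "com.example.app.LauncherDefault"
private const val DISCREET_ALIAS = "com.example.app.LauncherCalculator"

// Enable one launcher alias and disable the other, switching which icon
// (and name) appears on the home screen without killing the running app.
fun setDiscreetIcon(context: Context, discreet: Boolean) {
    val pm = context.packageManager
    val toEnable = if (discreet) DISCREET_ALIAS else DEFAULT_ALIAS
    val toDisable = if (discreet) DEFAULT_ALIAS else DISCREET_ALIAS

    pm.setComponentEnabledSetting(
        ComponentName(context, toEnable),
        PackageManager.COMPONENT_ENABLED_STATE_ENABLED,
        PackageManager.DONT_KILL_APP
    )
    pm.setComponentEnabledSetting(
        ComponentName(context, toDisable),
        PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
        PackageManager.DONT_KILL_APP
    )
}
```

The key design point is that the disguise lives at the launcher level, so a cursory glance at the phone's home screen during a search reveals nothing, while the app itself remains fully functional once opened.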




