The Download: China’s social credit law, and robot dog navigation


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s happening in the tech world.

Here’s why China’s new social credit law matters

It is easier to talk about what China’s social credit system is not than what it is. Ever since 2014, when China announced plans to build it, the system has been one of the most misunderstood aspects of China in Western discourse. Now, with new documents released in mid-November, there is an opportunity to correct the record.

Most people outside China assume it will act as a Black Mirror-style system, powered by technologies that automatically score every Chinese citizen based on what they do right and wrong. Instead, it is a mix of attempts to regulate the financial credit industry, to enable government agencies to share data with one another, and to promote state-sanctioned moral values, however vague that may sound.

Although the system itself will still take a long time to materialize, by publishing a draft law last week, China is now closer than ever to defining what it will look like and how it will affect the lives of millions of citizens. Read the whole story.

— Zeyi Yang

Watch as this robot dog climbs tricky terrain with just its camera

The news: When Ananye Agarwal took his dog for a walk up and down the steps of a local park near Carnegie Mellon University, other dogs stopped in their tracks. That’s because Agarwal’s dog was a robot, and a special one at that. Unlike other robots, which tend to rely heavily on an internal map to get around, his robot uses a built-in camera and relies on computer vision and reinforcement learning to navigate complicated terrain.

Why it matters: While other attempts to use camera signals to guide robot movement have been limited to flat terrain, Agarwal and his fellow researchers got their robot to climb stairs, clamber over rocks, and jump across gaps. They hope their work will help ease the deployment of robots into the real world, and vastly improve their mobility in the process. Read the whole story.

—Melissa Heikkilä

Rely on large language models at your own risk

When Meta released Galactica, an open-source large language model, the company was hoping for a big PR win. Instead, all it got was criticism on Twitter and a scathing blog post from one of its most vocal critics, culminating in its embarrassing decision to pull the model’s public demo after just three days.

Galactica was intended to help scientists by summarizing academic papers and solving math problems, among other tasks. But outsiders quickly prodded the model into producing “scientific research” on the benefits of homophobia, anti-Semitism, suicide, eating glass, being white, or being a man. The debacle showed not only how premature its launch was, but also just how insufficient researchers’ efforts to make large language models safer have been. Read the whole story.

This story is from The Algorithm, our weekly newsletter that brings you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Verified anti-vax Twitter accounts are spreading health misinformation
And neatly demonstrating the problem with charging for verification in the process. (The Guardian)
+ Maybe Twitter wasn’t helping your career as much as you thought. (Bloomberg $)
+ A deepfake of the founder of FTX has circulated on Twitter. (Motherboard)
+ Some liberal Twitter users refuse to leave. (The Atlantic $)
+ Apparently, Twitter’s layoff bloodbath is over. (The Verge)
+ Twitter’s potential collapse could wipe out vast records of recent human history. (MIT Technology Review)

2 NASA’s Orion spacecraft has completed its lunar flyby 🌒
Paving the way for humans to return to the Moon. (Vox)

3 Amazon’s warehouse monitoring algorithms are trained by humans
Low-paid workers in India and Costa Rica are sifting through thousands of hours of mind-numbing footage. (The Verge)
+ The AI data labeling industry is deeply exploitative. (MIT Technology Review)

4 How to come to terms with climate change
Accepting the hard facts is the first step toward avoiding the bleakest outcomes for the planet. (New Yorker $)
+ The world’s richest nations have agreed to pay for the damage caused by global warming. (The Atlantic $)
+ These three charts show who is most to blame for climate change. (MIT Technology Review)

5 Apple uncovered a cybersecurity startup’s awkward dealings
It compiled a document illustrating the extent of Corellium’s relationships, including with the notorious NSO Group. (Wired $)
+ The hacking industry is facing the end of an era. (MIT Technology Review)

6 The crypto industry is still feeling nervous
Shares in its biggest exchange have fallen to an all-time low. (Bloomberg $)
+ The UK wants to crack down on gamified trading apps. (FT $)

7 The criminal justice system is failing neurodivergent people
Impersonating an online troll landed an autistic man in jail for five and a half years. (Economist $)

8 Your workplace might be planning to scan your brain 🧠
All in the name of making you a more efficient employee. (IEEE Spectrum)

9 Facebook doesn’t care if your account is hacked
A series of new fixes for bailing out hacked accounts doesn’t seem to have had much effect. (WP $)
+ Parent company Meta is being sued in the UK over data collection. (Bloomberg $)
+ Independent artists are building the metaverse in their own way. (Motherboard)

10 Why training image-generating AIs on generated images is a bad idea
“Contaminated” images will only confuse them. (New Scientist $)
+ Facial recognition software used by the US government reportedly malfunctioned. (Motherboard)
+ The dark secret behind these cute AI-generated animal pictures. (MIT Technology Review)

Quote of the day

“They seemed to care more before.”

—Amazon Prime member Ken Higgins, who is losing faith in the company after a series of frustrating delivery experiences, tells the Wall Street Journal.

The big story

What if you could diagnose diseases with a tampon?

February 2019

On an unremarkable side street in Oakland, California, Ridhi Tariyal and Stephen Gire are trying to change the way women monitor their health.

Their plan is to use the blood from used tampons as a diagnostic tool. In this menstrual blood, they hope to find early markers of endometriosis and, ultimately, a variety of other disorders. The simplicity and ease of this method, if it works, will represent a vast improvement over the current standard of care. Read the whole story.

—Dayna Evans

We can still have nice things

A place for comfort, fun and distraction in these strange times. (Got any ideas? Drop me a line or tweet them to me.)

+ Happy Thanksgiving—in your nightmares!
+ Why Keith Haring’s legacy is more visible than ever, 32 years after his death.
+ Even the genteel world of dinosaur skeleton assembly is not immune to scandal.
+ Pumpkins are a Thanksgiving staple, but that wasn’t always the case.
+ If I lived in a frozen wasteland, I’m pretty sure I’d also be the grumpiest cat in the world.




