The military is responding to the call. NATO announced on June 30 that it is creating a $1 billion innovation fund that will invest in startups and venture capital funds developing "priority" technologies such as artificial intelligence, big data, and automation.
Since the war began, the United Kingdom has launched a new AI strategy specifically for defense, and Germany has earmarked just under half a billion dollars for research and artificial intelligence within a $100 billion cash injection for its military.
“War is a catalyst for change,” says Kenneth Payne, who directs defense research at King’s College London and is the author of the book I, Warbot: The Dawn of Artificially Intelligent Conflict.
The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield. Those with the most to gain are startups like Palantir, which hope to profit as militaries race to update their arsenals with the latest technologies. But long-standing ethical concerns about the use of artificial intelligence in warfare have become more urgent as the technology becomes more advanced, while the prospect of restrictions and regulations governing its use seems as remote as ever.
The relationship between technology and the military was not always so friendly. In 2018, following protests and employee outrage, Google withdrew from the Pentagon’s Project Maven, an attempt to build image recognition systems to improve drone strikes. The episode sparked a heated debate about human rights and the morality of developing AI for autonomous weapons.
It also led high-profile AI researchers such as Turing Award winner Yoshua Bengio and Demis Hassabis, Shane Legg, and Mustafa Suleyman, the founders of the leading AI lab DeepMind, to pledge not to work on lethal AI.
But four years later, Silicon Valley is closer to the world’s militaries than ever before. And it’s not just big companies: startups are finally getting a look in, says Yll Bajraktari, who was previously executive director of the U.S. National Security Commission on Artificial Intelligence (NSCAI) and now works for the Special Competitive Studies Project, a group pushing for greater adoption of AI in the U.S.
Companies that sell military AI make expansive claims about what their technology can do. They say it can help with everything from the mundane to the lethal, from screening résumés to processing satellite data or recognizing patterns in data to help soldiers make faster decisions on the battlefield. Image recognition software can help identify targets. Autonomous drones can be used for surveillance or attacks on land, in the air, or on water, or to help soldiers deliver supplies more safely than is possible by land.