The AI War on Iran: Project Maven, a Secretive Palantir-Run System, Helps Pentagon Pick Bomb Targets

Democracy Now

The Trump administration says the United States has struck 11,000 targets in Iran since the U.S.-Israeli war on the country began. Critics have questioned the accuracy of Maven, the artificial intelligence system the military uses to speed up the process of identifying targets.

“Imagine Google Earth for war, a map of war with white dots, infused with information like elevation, coordinates, what is precisely there, whether it’s friendly or foe,” says Katrina Manson, a reporter for Bloomberg News and author of Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare.

The Pentagon launched Project Maven in 2017. Google was an initial partner, but the company pulled out after over 3,000 Google employees signed a letter opposing the work. The big data firm Palantir then took over the project and has run it ever since.

Transcript

AMY GOODMAN: This is Democracy Now! I’m Amy Goodman, with Juan González.

As the U.S. and Israeli war on Iran enters its 32nd day, we turn now to look at how artificial intelligence is reshaping how wars are fought. The Trump administration says the U.S. has struck 11,000 targets in Iran since the war began. The military has largely relied on an AI system known as Project Maven to speed up the process of identifying targets. But critics have questioned the accuracy of the AI system. The Pentagon is now investigating whether Project Maven played a role in the U.S. strike on an Iranian girls’ school that killed over 170 people, mainly children.

The Pentagon launched Project Maven in 2017. Google was an initial partner, but the company pulled out after over 3,000 Google workers signed a letter opposing the work, saying, quote, “We believe that Google should not be in the business of war.” The big data firm Palantir then took over the project and has run it ever since. This is Palantir’s chief technology officer, Shyam Sankar, speaking on Bloomberg earlier this month.

SHYAM SANKAR: Current operations are ongoing, but I think people will reflect back and say this is the first large-scale combat operation that was really driven, enhanced, made substantially more productive with technology, with AI. … If you think about Gulf War I, I think — or, Gulf War II, sorry. Gulf War II, we did about a thousand targets. It was six months of planning for roughly 50 to a hundred people. And in this conflict, you’re looking at that equivalent of work for twice as many targets was done by one person in two weeks. So, how do we give our service members that Iron Man suit? We’re making them 50 times more productive than the adversary.

ED LUDLOW: Did you say “Iron Man suit”?

SHYAM SANKAR: Yeah, the conceptual Iron Man suit, right? Like, how do I make them superhuman?

AMY GOODMAN: That was Palantir Chief Technology Officer Shyam Sankar.

We’re joined now by Katrina Manson, award-winning journalist at Bloomberg, author of the new book Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare.

So, why don’t you lay it out? What is Project Maven, the companies involved, the revolts, like the thousands of Google workers who said no?

KATRINA MANSON: Project Maven started in 2017, at a time when the Pentagon was increasingly concerned about the technological capabilities China’s military was developing. Not only was China spending more on defense at a time when the U.S., of course, maintained the biggest defense budget of any country in the world, but the Pentagon was also aware that China had been studying America’s military weak points. The people who backed Project Maven believed that AI could be a route towards autonomy and could help the U.S. increase the scale and speed of warfare. Its backers also hoped that AI could deliver accuracy, protect civilians and protect U.S. and allied troops.

Of course, there was a huge backlash from campaigners who thought it would do the opposite, that AI could harm civilians, that there could be atrocities in such a future era of warfare. What’s so difficult about those arguments is they happen largely at the philosophical level. And so, I’ve spent several years now trying to find concrete examples of the way U.S. AI has actually been used on the battlefield, of moments where algorithms haven’t been able to successfully identify objects on the battlefield, but also where, with tweaks, they have. And so I’ve tried to come to this picture of what exactly this tool is.

Cut to today, the U.S. has said it is using a variety of AI tools in its operations against Iran. I’ve reported that includes Maven Smart System. That’s the platform, if you imagine Google Earth for war, a map of war with white dots, infused with information like elevation, coordinates, what is precisely there, whether it’s friendly or foe. That is the system that the U.S. is using now, in a widespread way, as a common operating platform, or even, as some people call it, AI mission control. And in just the first 24 hours of U.S. operations against Iran, the U.S. publicly announced they had struck a thousand targets. Now, that’s not exclusively down to AI, but AI is helping to rifle through data, identify and select targets, and then pair them with what the U.S. considers the most appropriate weapon.

You raised the question of the companies that have supported this. At the outset, there was Google. The Pentagon was facing this problem: it was used to, of course, making weapons of war, relying on big companies like Lockheed Martin and others. This was a moment when those companies were not at the cutting edge of AI. And the Pentagon felt that people had phones, they were using AI in their daily lives, but they weren’t using it at the Pentagon. And so they reached out to several startups. There was a New York startup that got involved that was very successful at identifying objects. It used to run a wedding blog; it would identify the concentric circles of a cake. It switched to identifying weapons of war. So, there was this big effort to —

AMY GOODMAN: What company was that?

KATRINA MANSON: Clarifai. Clarifai. And they did encounter problems with their own workforce. Several people decided they were not comfortable working on the project. The leaders of Clarifai told me they were convinced by the U.S. colonel who led Project Maven as its chief, who argued that the precision and accuracy he believed AI could bring, by helping humans do a better job of war, would deliver a better battlefield result. He also introduced me to other people, one of whom explained that the U.S. regularly makes mistakes at war. And so, many carried this burden and wanted to achieve a better result for those aims.

One of the companies that ended up working on the project once Google pulled out, or rather decided not to renew its contract, was Palantir, which built the interface, Maven Smart System, that is being used today. I’ve reported that it is set to become a program of record by the end of September, which means it will have a consistent funding stream from Congress and be even more widely used than it is today. There are more than 25,000 accounts in use across the U.S. military, in every single combatant command, or regional theater.

But the actual AI is made by companies that people will be familiar with: Microsoft, Amazon Web Services and others. And they have, over the years, struggled to correctly identify what is actually in front of them. In one early case — and, of course, I should be clear that an early case is not representative of where these algorithms are today, but I think it shows the struggle. In one early case, Microsoft decided that it was too hard to identify subclasses. And actually, drone screeners, who look at video footage and can identify by eye, are very, very practiced in this. They can identify the difference between a weapon on someone’s shoulder and, let’s say, a grocery basket, because they’ve been doing it so long. These algorithms were not able to do that. Sometimes it can come down to as little as three pixels on an image. And so, Microsoft decided, “Let’s just do two classes: people and vehicles.” The problem with that is the claim made for accuracy. The aim was to be able to distinguish between men, women and children, and an algorithm that just said “person” clearly wasn’t getting the U.S. military closer to performing that critical function under the laws of war: distinguishing who might actually be targeted.

Now, those algorithms improved. And one of the main moments when they start to improve comes in U.S. support to Ukraine. So, in 2022, when Russia invades Ukraine, there’s a huge effort to try and find Russian objects, military objects, for the Ukrainians to target. And the U.S. starts sharing what they call points of interest. This is everything short of a target, because the U.S. didn’t want to be identified as a direct participant in the war. But the enormous effort to share intelligence was able, I’m told, in a story I relate in the book, to identify Russian mobile missile launchers far better than the Ukrainians could. And in one case, the U.S. identifies what it says is a mobile missile launcher, tells the Ukrainians, and 18 minutes later, the Ukrainians are able to hit it.

JUAN GONZÁLEZ: Katrina Manson, I wanted to ask you — this whole issue of the worshiping of technology in warfare. When we were in the first Gulf War and in the second Gulf War, we were shown all of these smart bomb hits that the United States basically disseminated across the world to show the superiority of its weapons. What is the accuracy, from what you can tell, of how Project Maven identifies and hits targets that are actually military targets?

KATRINA MANSON: The U.S. has this extraordinary arsenal. It can bring to bear firepower as no one else in the world can. Where it has struggled, for years, for decades, is in knowing exactly where to put that firepower. In the view of the colonel who led Project Maven, there was a fundamental lack of intelligence on the battlefield. So, although there was this great ability to hit targets, strikes would only ever be as good as the information feeding them. And regularly the U.S. has struggled with that. In Afghanistan, soon after 9/11, the Marine colonel whose story I tell in the book is deployed, and he tells a story about carrying a huge computer with him on the seat beside him in the helicopter. And that really was the only tool he had to help the team understand what threats they thought they would be encountering. And, of course, as you know, the U.S. very quickly started encountering improvised explosive devices. And there’s a record of civilian harm in that war, as well. And so, his effort has always been to bring better information to the people who are risking their necks in the name of U.S. national security, and to try to reduce civilian harm. That process of bringing intelligence into battlefield operations continues to be complex. As the U.S. investigates what happened with the strike against the girls’ school in Iran, campaigners and others inside the military will be looking for accountability there, for an accounting of what went wrong, if it was a U.S. strike, which hasn’t been publicly confirmed yet, and of what data the U.S. military was drawing on.

AMY GOODMAN: We just have 20 seconds, but Marine Colonel Cukor’s wife didn’t really agree with him.

KATRINA MANSON: She wanted war to be fair. She was really against war as a person. She wanted to know her husband had tried to do the right thing. And she struggled with that, as did he. And even he, in his parting shot to me in the book, acknowledges there are dark parts to this AI technology, and he wants the U.S. to be a responsible custodian of it.

AMY GOODMAN: We want to thank you for being with us, Katrina Manson, author of Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare. She’s an award-winning journalist at Bloomberg.

A very happy birthday to Mike Burke! To see our 30th anniversary celebration with Angela Davis and Bruce Springsteen, with Patti Smith and Michael Stipe and the Pulitzer Prize-winning poet Mosab Abu Toha, the playwright V and others, go to democracynow.org. And check out our travels in the coming days as Steal This Story, Please!, the documentary about Democracy Now!, tours the country, starting April 9th here in New York. I’m Amy Goodman, with Juan González.