
The Lavender program

The machine identifying “human targets” for assassination in Gaza


Djuna Schamus details the dangerous artificial intelligence and mass surveillance tools Israel is lethally deploying against Palestinians under the Lavender program.

In April 2024, amid the ongoing assault on Gaza, the Israeli journalist Yuval Abraham published a comprehensive +972 Magazine article unveiling details of the Israeli military’s use of artificial intelligence (“AI”) in generating “human targets” in Gaza. The AI program—called “Lavender”—is capable of rapidly processing a massive amount of data collected on nearly the entire population of Gaza to produce potential individual human “targets” for assassination, those whom the machine identifies as possible members, including low-ranking operatives, of the military wings of Hamas or the Palestinian Islamic Jihad (PIJ).

Six sources from within the military disclosed details about this program, including the fact that the army authorized officers to automatically adopt Lavender’s kill lists, targeting people identified by Lavender for assassination without independently evaluating the machine’s determinations, despite the awareness that at least one in every ten people is selected in “error.” At Lavender’s peak, it identified 37,000 people in Gaza for potential assassination. Officials then “linked” those identified by Lavender with location tracking programs—including one named “Where’s Daddy?”—allowing the army to strike people in their homes, killing them and often their entire families.

To grasp how a program like Lavender could be built and used to kill thousands of people, it is important to understand the context in which it arose. In many ways, Lavender is the culmination of a long history of Israel’s control of all aspects of Palestinian life (and death) through separation, surveillance, and systemic discrimination. To create an AI-powered “target-generating” program, it is necessary to have an enormous amount of data about the targeted population. This Israeli kill system must be situated within the historical context of decades of occupation, apartheid, and dehumanization that have preceded it.

Since its 2005 “disengagement” from Gaza, Israel has aimed to exercise what anthropologist Darryl Li has described as “maximum control and minimal responsibility” over the lives of Palestinians there. As part of this relationship of domination—and central to its ability to collect data on Palestinian people—Israel has long maintained control over the Palestinian population registry and ID card process in occupied Gaza, the West Bank, and East Jerusalem. Additionally, separating and territorially fragmenting Palestine has been crucial to Israel’s ability to control the lives of Palestinians, and the vast checkpoint infrastructure the state has established plays a key role in the mass surveillance of and dominance over Palestinians throughout the Occupied Territories.

Israel’s control over Palestinian life and death is intimately linked to its complete control of Palestinian movement and space. Without a special travel permit from Israel, it is illegal for Palestinians in the occupied West Bank to travel to Gaza and Jerusalem or for Palestinians in Gaza to travel to the West Bank and Jerusalem. To travel from Gaza to any other part of Palestine-Israel (or to travel between towns within the occupied West Bank), Palestinians must move through militarized Israeli-controlled borders and checkpoints. There are more than 30 checkpoints that connect Gaza and the West Bank to Israel, and according to the United Nations there are more than 645 “movement obstacles” within the West Bank itself.

These border/frontier sites are central to Israel’s mass surveillance apparatus. The checkpoints have over the years become progressively high-tech, collecting more and different kinds of data on the Palestinian people traversing them. As Antony Loewenstein details in his recent book The Palestine Laboratory, Israel has long developed novel technologies of war and surveillance and tested them on “unwilling subjects,” the Palestinian people. “Palestine is Israel’s workshop,” Loewenstein writes, “where an occupied nation on its doorstep provides millions of subjugated people as a laboratory for the most precise and successful methods of domination.”

For instance, in its 2023 report “Automated Apartheid: How Facial Recognition Fragments, Segregates and Controls Palestinians in the OPT,” Amnesty International describes in detail the Israeli military’s use of facial recognition technology as a tool of mass surveillance and an additional barrier to Palestinian movement. The report expounds upon a number of different facial recognition programs that the IDF employs, including “Red Wolf,” a system utilized at checkpoints in Hebron. The report, drawing from the testimony of IDF soldiers, describes how Red Wolf works, explaining that

As an individual enters the checkpoint and is kept in place with cameras facing them, their picture is taken and they are, according to testimonies gathered by Breaking the Silence, assessed against the information available on record and—subject to whether they have permission to pass, or whether they are due to be arrested or questioned—they are either allowed through or barred from moving to the exit turnstiles of the checkpoint.

The report further describes how an individual’s biometric data is added to the database, noting that if the system does not recognize a face, a soldier at the checkpoint will attach the face to the ID until the Red Wolf system “learns” to recognize the person. Furthermore, “if a biometric entry does not exist on the individual in question, they are biometrically enrolled into the Red Wolf system, without their knowledge and consent.” Over time, then, the database of Palestinian faces within the system expands. Since the implementation of Red Wolf, Palestinians have reported that soldiers were able to identify them without their presenting identification cards.
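To make the mechanism Amnesty describes more concrete, the following is a minimal illustrative sketch of a checkpoint flow with automatic enrollment, written in Python against invented helper names and a hypothetical similarity threshold; Red Wolf’s actual code, models, and databases are not public.

```python
# Illustrative sketch only: a generic checkpoint flow of the kind Amnesty describes,
# with invented names and thresholds. Not Red Wolf's actual implementation.
import numpy as np

MATCH_THRESHOLD = 0.6   # hypothetical similarity cutoff
database = {}           # person_id -> stored face embedding; grows over time

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def checkpoint_scan(face_embedding, id_card_number):
    """Match a scanned face against the database; enroll it if unknown."""
    best_id, best_score = None, 0.0
    for person_id, stored in database.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_id, best_score = person_id, score

    if best_score >= MATCH_THRESHOLD:
        # Recognized: the pass/detain decision is then looked up on record.
        return best_id
    # Not recognized: the face is attached to the presented ID and enrolled,
    # without the individual's knowledge or consent, as the report describes.
    database[id_card_number] = face_embedding
    return id_card_number
```

The key point is the fallback branch: an unrecognized face is not rejected but silently added to the database, which is how such a system comes to “recognize” people over time.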

The Amnesty Report also explains that Red Wolf is linked to additional, larger databases containing more information about the Palestinian people within its system. Unlike Red Wolf, which is employed specifically at checkpoints, the Blue Wolf application is mobile, allowing soldiers to utilize the facial recognition system via their smartphones. Soldiers testified that they were encouraged to take and upload as many photographs of Palestinian people as possible to the application, “creating new exclusively Palestinian biometric entries” and creating or adding to Palestinians’ profiles. Once this biometric data is added to the app, individuals can be identified quickly.

While the use of Blue Wolf and Red Wolf has been widely documented in the West Bank, until very recently there were no reports of facial recognition technology being deployed in Gaza. In March 2024, however, The New York Times reported that Israel had adopted a sweeping and “experimental” facial recognition program in Gaza starting in late 2023. Reporter Sheera Frenkel gathered information from anonymous sources within the IDF, including intelligence officers, military officials, and soldiers. According to these sources, the facial recognition technology was initially employed by the army to identify Israelis who were taken hostage on October 7, 2023.

In the following weeks and months, however, the facial recognition program was increasingly used by Israel to identify people with ties to Hamas or other militant groups, particularly by matching faces in drone footage. Eventually, more and more cameras were positioned throughout Gaza. Camera-filled checkpoints were set up on roads within Gaza, scanning the faces of Palestinians fleeing areas under heavy Israeli bombardment. Israeli soldiers were also given handheld cameras connected to the technology. As Frenkel wrote, “The expansive and experimental effort is being used to conduct mass surveillance there, collecting and cataloging the faces of Palestinians without their knowledge or consent.”

The program relies in part on technology developed by Corsight AI—a facial recognition technology company headquartered in Israel—in the wake of October 7. Although on its website the company boasts that its technology is exceptionally effective at detecting and recognizing faces, sources disclosed to The New York Times that Corsight technology “struggled” to identify faces “if footage was grainy and faces were obscured,” further explaining that “there were also false positives, or cases when a person was mistakenly identified as being connected to Hamas.” Soldiers discovered that Google Photos was better able to identify people whose faces were only partially visible, so they supplemented the Corsight technology with Google’s.

The experience that poet Mosab Abu Toha endured while passing through one of these military checkpoints in Gaza highlights some of the many human rights concerns raised by these programs. Without his knowledge or consent, Abu Toha had walked within the range of cameras that were capturing the biometric data of Palestinians passing by; his face was scanned by the facial recognition technology, and he was flagged as a “wanted person.” He describes the harrowing experience in a New Yorker essay:

They’re not going to pull me out of the line, I think. I am holding [my three-year-old son] and flashing his American passport. Then the soldier says, “The young man with the black backpack who is carrying a red-haired boy. Put the boy down and come my way.”
He is talking to me.

Abu Toha discusses the confusion he experienced when the soldiers knew his name, even though he had not shared it with them or shown his ID. The poet was then taken away from his family, blindfolded, and transported to an Israeli detention center where he was beaten and interrogated for two days before being returned to Gaza without any further explanation for his detention and torture.

The human rights concerns that Blue Wolf, Red Wolf, and the new facial recognition program in Gaza introduce are immense and wide-ranging. As Abu Toha’s experience makes clear, the consequences of being “flagged” by one of these systems can be substantial, implicating someone’s liberty—and perhaps even their life. Importantly, these facial recognition technologies are not applied to Israeli citizens (including settlers in illegal settlements in the West Bank) or to most foreign nationals.

This system of mass surveillance and control is fundamental to the functioning of a program such as Lavender, which relies on a massive amount of data. Before Lavender “learns” to identify possible “human targets,” it must first be “trained.” While the exact composition of Lavender’s training sets is not fully known, sources disclosed the basic process through which Lavender “generates targets.”

Lavender is trained on the data of people ostensibly “known” to be associated with the military wing of Hamas or PIJ, thereby “learning” to recognize the “features” of militants in other residents of Gaza. Training Lavender on which “features” correspond to membership in a militant group raises foundational questions about how a “Hamas operative” or militant is defined and, accordingly, whose data is used. Part of teaching the system how to flag people potentially connected to Hamas consists of deciding which people’s data will be used to train Lavender.

An anonymous source who spoke with Yuval Abraham and worked with the IDF’s data science team responsible for training Lavender observed, “I was bothered by the fact that when Lavender was trained, they used the term ‘Hamas operative’ loosely, and included people who were civil defense workers in the training dataset.” For example, the Lavender training team used the data collected from employees of the Internal Security Ministry to teach the system. The source explained that the choice to include data from government workers could have significant consequences, leading the system to more frequently select civilians as “targets.”

Another source commented:

How close does a person have to be to Hamas to be [considered by an AI machine to be] affiliated with the organization? It’s a vague boundary. Is a person who doesn’t receive a salary from Hamas, but helps them with all sorts of things, a Hamas operative? Is someone who was in Hamas in the past, but is no longer there today, a Hamas operative?
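One way to see why these definitional choices matter is to sketch a generic supervised classifier of the kind such a system might resemble. Everything below is an assumption for illustration, not Lavender’s actual model, features, or data; the point is simply that whoever is labeled an “operative” in the training set determines what the system learns to flag.

```python
# Purely illustrative: a generic supervised classifier, NOT Lavender's actual model.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature columns standing in for the kinds of signals described in
# reporting: [shares_group_chat_with_flagged_person, phone_changes_per_year,
# address_moves_per_year]. These are invented stand-ins.
X_train = rng.random((1000, 3))

# The crucial human choice: who gets labeled 1 ("operative") in the training set.
# Here an arbitrary rule plays that role; per the sources, the real label set
# reportedly included civil defense workers and Internal Security Ministry employees.
y_train = (X_train[:, 0] > 0.7).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Scores assigned to everyone else inherit every assumption baked into the labels.
population = rng.random((5, 3))
print(model.predict_proba(population)[:, 1])
```

If civilians are included in the positive class, people who merely resemble them in whatever features the model weighs will receive higher scores.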

In addition to determining whose data is input as examples of “targets,” determinations must be made regarding what data is input and the significance of these “features.” Israeli Brigadier General Yossi Sariel, in his pseudonymous book The Human-Machine Team, wrote of the data input into the program: “The more information, and the more variety, the better,” including “visual information, cellular information, social media connections, battlefield information, phone contacts, photos.”

The specific “features” treated as bearing on someone’s apparent connection to a militant group have not been made public, but in his book Sariel listed examples such as being in a messaging group with a “known militant,” changing cell phones every few months, and moving addresses frequently. Sariel further explained that while human officers must first determine which “features” correlate with membership in a militant group, the AI system will ultimately discern features independently.

While it is difficult to fully determine how these predictive AI models are trained without having access to more details on what supposedly incriminating “features” are being input and how they are weighted, the few details provided by Sariel about damning “factors” raise deep concerns about the potential for human rights violations posed by these programs.

As it sorts and weights these ever-changing data sets, the Lavender program generates a “rating” from 1 to 100 for each person assessed, ranking how apparently likely it is that an individual is actively affiliated with the military wing of Hamas or PIJ. Sources informed the +972 team that nearly every person in Gaza was analyzed by the Lavender system and given a rating, which would then determine whether they are a potential “human target.”

The number of potential targets Lavender generated for the IDF’s use depended on where the “threshold” rating at which to attack a particular person was placed. Lowering the rating threshold, therefore, led to more people being marked for execution. The speed of generating “new targets” is unprecedented. In his +972 article “A Mass Assassination Factory,” Yuval Abraham reported that former IDF Chief of Staff Aviv Kochavi explained that in the past—before the use of AI—around 50 “targets” per year were established by the IDF. In an interview with The Jerusalem Post, the chief of the “target bank” explained that, with the help of AI, the IDF was able for the first time to “cross the point where they can assemble new targets even faster than the rate of attacks.” Before this technology was developed, the IDF apparently “ran out” of what The Jerusalem Post referred to as “quality targets” after a few days.
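The arithmetic of the threshold can be illustrated with invented numbers: if every person receives a rating, the size of the “target” list is simply the count of ratings above whatever cutoff is chosen, so lowering the cutoff mechanically expands the list regardless of the evidence about any individual. The sketch below uses a uniform distribution of hypothetical scores, not real data.

```python
# Illustrative only: invented ratings, showing how a threshold choice determines
# how many people are marked, independent of any individual evidence.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-100 ratings for a large population (uniform purely for illustration).
ratings = rng.integers(1, 101, size=2_000_000)

for threshold in (95, 90, 80):
    marked = int((ratings >= threshold).sum())
    print(f"threshold {threshold}: {marked:,} people marked as potential targets")

# With this invented distribution, dropping the cutoff from 95 to 80 multiplies
# the number of people marked several times over, with no change in the data.
```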

After a potential target was identified by Lavender, officers would have to determine whether to assassinate the person who was given a high rating by the AI program. Sources reported to Abraham that—at first—Lavender was used as an auxiliary tool, but that about two weeks after October 7, the army authorized officers to automatically adopt Lavender’s kill lists. After the decision to adopt Lavender’s “kill lists” wholesale, there was essentially no human substantiation mandated. In fact, officers were told to treat Lavender’s decision as an order. In other words, the IDF licensed the blanket and unquestioning adoption of Lavender’s kill lists with no attempt at manual and human verification.

The only “check” that sources described making prior to bombing a particular person was making sure that the person Lavender identified was male (as there are no women affiliated with the military wing of Hamas). Because the only verification that officers conducted was checking if they believed that someone “has a male or a female voice” (ostensibly by listening through their phones), there was no one determining whether the person was actually affiliated with—let alone an operative in—the military wing of Hamas. Furthermore, it appears as though much of the tracking of individuals was done using their phones, and therefore if a “target” identified by Lavender gave their phone to someone else, that person could be targeted and killed without anyone verifying whether they were even the specific target identified by Lavender.

The near-absolute adoption of Lavender’s list of “human targets” by the final (human) decision-makers points to numerous concerns that human rights advocates and scholars have expressed about the use of AI and the lack of significant human oversight. Many proponents of the use of predictive algorithmic models focus on the fact that a final human decision-maker acts as a “safeguard,” and that these systems are meant to assist human judgment, not replace it. However, as the unquestioning adoption of Lavender’s “kill list” makes clear, a human having the “final say” is not an adequate check on the ethical concerns raised by the use of AI in these contexts.

While some have suggested that AI can be used to protect civilian life in armed conflict, Israel’s bombardment of Gaza over the past nine months belies this claim. As of July 8, 2024, the Gaza Health Ministry estimates that more than 38,000 Palestinians have been killed and more than 87,000 have been wounded by Israeli attacks in Gaza, many of them women and children. Many thousands more are still missing, trapped underneath the rubble.

The unfathomable killing of so many people over nine months is in part due to how the IDF has decided to use “target-generating” programs like Lavender. This is partly because at the beginning of its current attack on Gaza the Israeli military decided that, as Yuval Abraham reported, “for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians.” For a high-level Hamas leader, one wonders what number of civilians would be considered legitimate collateral damage.

The understanding that singular targets have been used as a pretext to kill many people is underscored by the way that Lavender interacts with an automated tracking system that the Israeli army dubbed “Where’s Daddy?” The “Where’s Daddy?” system and others like it were used to track people identified by Lavender, “link” those individuals to their family homes, and automatically alert the “targeting officer” when the person entered their home, at which point the homes would be bombed. Effectively, the army was waiting for people identified by Lavender to enter their homes before attacking them, radically increasing the chance that entire families would be killed. The choice to name the location tracking program “Where’s Daddy?” points to the disturbing reality that the IDF intended to kill children upon their father’s return home, and that this was made possible and palatable by these automated programs.

One source described how straightforward it was to add the people identified by Lavender into the location tracking program, and how this could be done by low-ranking officials in the IDF. Once these individuals were added and placed under ongoing surveillance, their homes could be bombed when the cell phone linked to the “target” entered the family dwelling. The source explained to Abraham, “one day, totally of my own accord, I added something like 1,200 new targets to the [tracking] system, because the number of attacks [we were conducting] decreased. That made sense to me. In retrospect, it seems like a serious decision I made.”
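The pattern the sources describe, linking a tracked phone to a “family home” and raising an automated alert when the phone enters it, resembles a simple geofence loop. The sketch below is a hypothetical illustration with invented data structures and coordinates; the actual “Where’s Daddy?” system has not been published.

```python
# Illustrative sketch only: invented data structures and a hypothetical location feed.
from dataclasses import dataclass
from math import hypot

@dataclass
class TrackedPerson:
    person_id: str
    home_xy: tuple        # coordinates of the linked "family home" (flat grid for illustration)
    alert_radius: float   # metres

# Invented watchlist entry; per the reporting, names were added in bulk by low-ranking officials.
watchlist = [TrackedPerson("example-1", (100.0, 200.0), 50.0)]

def notify_targeting_officer(person_id: str) -> None:
    print(f"ALERT: tracked phone for {person_id} has entered the linked home")

def on_location_update(person_id: str, xy: tuple) -> None:
    """Called by a hypothetical phone-location feed each time a position is reported."""
    for person in watchlist:
        if person.person_id != person_id:
            continue
        distance = hypot(xy[0] - person.home_xy[0], xy[1] - person.home_xy[1])
        if distance <= person.alert_radius:
            notify_targeting_officer(person_id)   # the automated alert step

# Example: a position report inside the 50 m radius triggers the alert.
on_location_update("example-1", (110.0, 190.0))
```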

These accounts underscore the deadly consequences of such rapid and unthinking decisions: The technology allows for officials—far removed physically and emotionally from the scene—to impose mass death and destruction within a few seconds. The result of this reckless campaign has been the death of tens of thousands of civilians and the erasure of entire extended families from Gaza’s population registry.

In “A World Without Civilians,” Elyse Semerdjian writes, “[T]oday’s ‘First AI War’ is a continuation of Gaza used as a laboratory for necrocapitalism, where weapons field tested on Palestinians fetch higher dollars at market that, in turn, enriches the weapons industry and politicians with ties to it.” In The Palestine Laboratory, Loewenstein warns that Israel has been able to insulate itself from accountability and political backlash against its human rights abuses and “endless occupation” due, in large part, to the nation’s practice of selling its state-of-the-art military technology across the globe. This raises the very real concern that murderous AI technology like Lavender and “Where’s Daddy?” will be spreading to other countries around the globe.

The immense loss of life and disregard for human rights that have been laid bare over the past nine months during Israel’s genocidal campaign in Gaza have put the world on notice that to do business with Israel is to aid in the commission of war crimes and that this kind of use of AI in warfare can have catastrophic outcomes. The mass student protests across the globe, the rise of the Boycott, Divestment, Sanctions (BDS) movement, and the growing coalition of technology workers speaking out against their employers in organizations such as No Tech for Apartheid provide hope that Israel’s “Palestine laboratory” is being revealed as the unethical death-making machine that it is.


Djuna Schamus

Djuna Schamus is a recent graduate of New York University School of Law and is based in Brooklyn, New York. She will begin a fellowship focused on PIC abolition in the fall.