“The Palestinian population under military rule is completely exposed to espionage and surveillance by Israeli intelligence.”
(Veterans of Unit 8200, Sept. 12, 2014)
Source: LinkedIn
Unit 8200, Unit 9900, and Unit 504 are the three main units that make up Israel's Military Intelligence Directorate, one of the IDF's oldest directorates, established soon after the creation of the State of Israel. Its main objective is to defend Israel against its enemies by providing intelligence warnings and alerts to the government, the IDF, and other relevant parties, both day to day and in wartime. To do so, the Intelligence Corps must use its resources effectively, closely monitoring terrorist activity, developments in Arab nations, and global technological advances so that security personnel can safeguard Israel from threats.
History of Unit 8200 and its turning point in 1973
Source: Photography from the ICT Corps website (ATC system)
The existence of Unit 8200 was hardly acknowledged until a decade ago; its history has never been disclosed or reported, except for fragments. Unit 8200 predates Israel’s war of independence in 1948. Starting in the British Mandate period of the 1930s, what was then known as Shin Mem 2 (an acronym of the Hebrew phrase for information service) tapped phone lines of Arab tribes to gather information about planned riots. In 1948, it was renamed 515—a random number so that it could be discussed without using words. During the second war between Israel and its Arab neighbors in 1956, the name was changed once more, to 848.
The turning point for Unit 8200 came in 1973, after the Yom Kippur War, when Israel, surrounded by its enemies, was caught off guard by the invasions of Egypt and Syria in the most significant intelligence failure in its history. Intelligence and national security expert Yossi Melman has said that a Unit 848 officer captured by the Syrians shared crucial information with them. The failure prompted a national reckoning and a reorganization: the unit was renamed 8200, another arbitrary number, and was completely compartmentalized, so that different teams within the unit were unaware of one another's activities. Like a startup, each team operates independently.
More importantly, Israel concluded it could no longer risk depending on other parties, especially the tech sector in the United States, for access to cutting-edge technologies. As a result, 8200 evolved into the nation's internal R&D center, the driving force behind the Startup Nation, with a rapidly expanding workforce and a growing role in an increasingly digital world. Yair Cohen, who spent 33 years in 8200, the last five as its commander, asserts that "90% of the intelligence material in Israel comes from 8200." While Israel's Mossad spy agency is renowned, 8200 remains relatively unknown, according to Cohen, even as its influence has grown. Most Israelis are required to serve in the IDF from age 18; everyone is screened by the IDF as they near high school graduation, and 8200 is free to select anyone it wishes.
“Now these kids are on the ‘short list’ for being recruited to 8200.”
(Tovy Stupp, director of the cybertech academic program at Amal Lady Davis High School in Tel Aviv, Jan. 22, 2014)
In 2012, Prime Minister Benjamin Netanyahu made cyber-defense a priority, stepping up efforts to grow the IDF's tech manpower. To that end, the army launched two programs: Gvahim, which trains high school students in the country's center, and Magshimim, which trains youth in the country's periphery. The most popular training tracks are the Talpiot track, the Erezim track, and the Gama cyber training track.
Both programs aim to equip students with the knowledge and skills necessary to safeguard Israel's digital infrastructure against the hacking threats it faces daily.
Unit 8200 is the primary information collection unit for the Military Intelligence Directorate, with soldiers responsible for developing and utilizing tools to gather, process, evaluate, and distribute data to the relevant authorities. The unit operates in all zones and works closely with combat field headquarters to facilitate the rapid exchange of information.
Unit 8200's critical role in the "Startup Nation"
Unit 8200 is recognized for its contribution to the development of Israel's high-tech expertise. As a consequence, the nation is regarded as a global leader in innovation and has the largest concentration of startups per capita in the world; Viber, Wix, and hundreds of other high-tech startups have their origins in Unit 8200. Military unit alumni associations have existed in Israel for a long time, but 8200's new approach was to use its association for business development, including building a LinkedIn-style networking website, rather than focusing on preserving the memory of the fallen or on nostalgia for bygone glories. Many of Israel's high-tech and cybersecurity startups were founded by IDF alumni, and Unit 8200 alumni in particular; these founders collaborate with global high-tech companies and receive funding from them for continued innovation. One recent deal, soon to be disclosed publicly, is Nvidia's roughly $700 million acquisition of Run:ai, a Tel Aviv-based startup.
Source: Calcalist
The substantial financing the military receives in Israel appears to shape the kinds of projects and programs it conducts. Yet other nations that spend significantly more on their armed forces than Israel do not perform nearly as well at harnessing that force for innovation. All things considered, Unit 8200 and the IDF have a special configuration, much of which is still shrouded in mystery.
Unit 8200's and the IDF's use of AI in the ongoing war
"Human bottleneck for both identifying new targets and making decisions to approve them."
(Brigadier General Y.S., current commander of the elite Israeli intelligence Unit 8200, May 5, 2021)
In 2021, the current commander of the elite Israeli intelligence Unit 8200 authored a book titled "The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World," released in English under the pen name "Brigadier General Y.S." The book discusses designing a special machine capable of rapidly processing vast amounts of data to generate thousands of potential "targets" for military strikes during wartime. The Israeli military's strategy, including that of Unit 8200, relies heavily on various data and artificial intelligence initiatives it deems essential for victory in combat. These AI initiatives analyze extensive sensor data and convert it into actionable intelligence.
"The Gospel," "Alchemist," and "Depth of Wisdom" were the three AI programs the IDF used in its initial AI operation on Gaza, which led to hundreds of casualties in just 11 days. Further details on how these programs were used in that inaugural AI conflict, and the repercussions of their deployment, were disclosed in a prior investigation. According to a recent investigation by +972 Magazine and Local Call, the Israeli army has since created artificial intelligence programs called "Lavender" and "Where's Daddy" to generate lists for targeted killings, without any requirement to verify the reasons behind the machine's decisions or to review the raw intelligence data it used.
Six Israeli intelligence officers who served in the army during the recent Gaza Strip conflict, and who directly observed the use of AI to identify targets for attack, assert that artificial intelligence played a crucial role in the extensive bombing of Palestinians, especially in the initial stages of the war. According to these sources, the AI's influence on military operations was so great that personnel treated the machine's outputs almost "as though it were a human judgment," per the +972 Magazine investigation.
In the ongoing conflict, the IDF uses three distinct AI systems to target and eliminate alleged operatives of what it refers to as "Hamas and Palestinian Islamic Jihad (PIJ)." These systems, named Lavender, The Gospel, and Where's Daddy, each serve a specific purpose: Lavender identifies individuals and adds them to a targeting list; The Gospel identifies buildings and structures where the army alleges militants operate; and Where's Daddy tracks the targeted individuals so that bombings can be carried out when they are in their family homes.
Israeli airstrikes carried out on the basis of these AI systems' choices, particularly in the initial weeks of the war, contributed to the deaths of over 37,000 Palestinians, the majority of them women, children, or individuals uninvolved in the conflict.
Source: worldpoliticsreview.com
Lavender and Where's Daddy targeting strategy
“It’s much easier to bomb a family’s home. The Lavender system is built to look for them in these situations.”
(A., intelligence officer, April 3, 2024)
According to +972 Magazine, the Israeli army moved through distinct stages in its use of the Lavender and Where's Daddy AI systems during the early weeks of the Gaza war. "Human targets," a term the Israeli army used before October 7 under the regulations of the military's International Law Department, could be targeted in their private residences even in the presence of civilians. Such designated human targets were selected with great precision, and only high-ranking military officials were targeted in their homes, to uphold the principle of proportionality as outlined in international law.
After October 7, however, the focus shifted. Under "Operation Iron Swords," the military decided to classify all members of Hamas's armed wing as potential human targets, regardless of their rank or importance within the organization. According to four sources interviewed by +972 and Local Call, Lavender, a tool used to pinpoint human targets in the ongoing conflict, has flagged around 37,000 Palestinians, the majority of them young, as suspected "Hamas militants" for potential elimination. The IDF Spokesperson denied the existence of such a list in a statement to +972 and Local Call.
“Once you go automatic, target generation goes crazy.”
(B., senior intelligence officer, April 3, 2024)
Before authorizing an airstrike in the initial phases of the conflict, the IDF would typically devote only about "20 seconds" per target, essentially to verify that a target flagged by Lavender was male. This was done despite the system's tendency to flag individuals with no affiliation, or only a weak connection, to militant organizations, and to produce what were deemed "errors" in approximately 10% of cases. About two weeks into the war, soldiers were authorized to adopt Lavender's kill lists automatically. That authorization came after intelligence personnel manually verified the accuracy of a random sample of several hundred targets selected by the AI system: upon confirming that Lavender's outputs identified an individual's Hamas affiliation with 90% accuracy in that sample, the army sanctioned the system's sweeping deployment.
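To make the statistical logic of that spot check concrete, here is a minimal sketch, not the IDF's actual procedure, of how precision is typically estimated from a manually audited random sample. The sample counts below are assumptions chosen only to reproduce the reported ~90% figure.

```python
# Illustrative only: estimating a classifier's precision from a manually
# audited random sample. The counts are hypothetical, chosen to match the
# ~90% figure reported by +972; nothing here reflects any real system.
import math

def precision_estimate(confirmed: int, sample_size: int, z: float = 1.96):
    """Point estimate plus a normal-approximation 95% confidence interval."""
    p = confirmed / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# "Several hundred" audited targets, ~90% confirmed (assumed numbers).
p, low, high = precision_estimate(confirmed=270, sample_size=300)
print(f"precision ~ {p:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```

Even a sample of several hundred leaves a confidence interval a few percentage points wide, so a flat "90% accurate" figure is itself only an estimate, not a guarantee about any individual marked by the system.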
Slides from a lecture presentation by the commander of IDF Unit 8200’s Data Science and AI center at Tel Aviv University in 2023, obtained by +972 and Local Call.
As +972 Magazine explains, the Lavender machine learns the characteristics of known PIJ and Hamas operatives from its training data, then looks for those same traits, known as "features," across the general population. Individuals exhibiting several suspicious features receive a high rating and become potential targets for assassination. According to sources interviewed by +972 Magazine and Local Call, the machine assigns nearly every individual in Gaza a rating from 1 to 100 indicating the likelihood that they are a militant.
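Mechanically, that description matches ordinary supervised scoring. The sketch below is a purely illustrative toy on synthetic data: fit a model on labeled examples, then convert its predicted probabilities into a 1-100 rating. Every value in it is invented, and it bears no relation to the actual system, its data, or its features.

```python
# Toy illustration of generic feature-based scoring on synthetic data.
# It demonstrates only the statistical pattern described above and has
# no connection to any real system, dataset, or feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))                      # 8 abstract features
y_train = (X_train[:, :3].sum(axis=1) > 1.0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(5, 8))                  # unlabeled individuals (toy)
probs = model.predict_proba(X_new)[:, 1]         # probability of the label
ratings = np.clip(np.round(probs * 100), 1, 100).astype(int)  # 1-100 scale
print(ratings)
```

The crucial point the sources make is that such a score is a correlation over features, not evidence: anyone whose features happen to resemble the training examples will score high, which is exactly the failure mode described later in this article.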
Once Lavender generates the "kill list," IDF officers wait for another machine, "Where's Daddy," to pinpoint in real time the moment operatives enter their residences. The program monitors numerous individuals concurrently, detects their presence at home, and promptly notifies the targeting officer, who then designates the house for bombing.
“As a military committed to upholding moral principles and international law, we are allocating substantial resources to minimize harm to the civilians whom Hamas has forced into serving as human shields. It is not the Gazan people, but Hamas, that is the adversary in our conflict.”
(Major Keren Hajioff, an Israeli spokesperson, Dec. 14, 2023)
According to three intelligence sources who spoke with +972 and Local Call, "dumb bombs" were the only munitions used to kill the junior operatives flagged by Lavender, in order to save more costly weapons. If a junior target lived in a high-rise building, the army would not attack him, unwilling to expend a more expensive and accurate "precision bomb," whose effects are more contained, on someone of so little importance. If the target lived in a building of only a few floors, however, the army was permitted to use a dumb bomb to kill both the junior target and everyone else inside the building.
Source: telegraph.co.uk
Lavender's calculations aren't 100% accurate
According to a source who used Lavender, there was no "zero-error" rule, as +972 Magazine reported; errors were managed statistically. "The procedure was that even if you are not certain about the accuracy of the machine, you can be confident statistically due to the extent and scale of the data." In other words, a known risk was accepted. Lavender's calculations were considered accurate only 90% of the time; it was known in advance that roughly 10% of the human targets marked for assassination were not members of Hamas's military wing. The machine may have unintentionally flagged individuals whose communication patterns resembled those of recognized Hamas or PIJ operatives: civil defense and law enforcement personnel, relatives of militants, residents with names and nicknames matching those of operatives, and Gazans using equipment previously owned by a Hamas operative.
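The scale implied by that trade-off is easy to make concrete using the article's own two figures (roughly 37,000 flagged individuals and a 10% known error rate); the snippet below is just that arithmetic, nothing more.

```python
# Back-of-the-envelope arithmetic using only the figures cited above.
flagged = 37_000       # individuals reportedly flagged by Lavender
error_rate = 0.10      # share known in advance to be misidentified
print(f"expected misidentifications: ~{int(flagged * error_rate):,}")  # ~3,700
```

In other words, accepting a 10% error rate at that scale means knowingly marking on the order of 3,700 people who are not members of the military wing.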
Meta’s critical role in linking targets to AI systems
On April 16, 2024, Paul Biggar, a software engineer and devtools founder, wrote on his blog that Israel had targeted individuals merely for being members of the same WhatsApp group as a suspected militant, a detail largely overlooked in the Lavender AI article published by +972 Magazine. WhatsApp pushed back: "We lack evidence to confirm the accuracy of these reports. There are no backdoors in WhatsApp, and we do not provide the government with access to extensive data. Meta has consistently published reliable transparency reports for more than ten years. Our principles remain unchanged: we carefully examine, verify, and address law enforcement requests in accordance with relevant laws and internationally accepted standards, including human rights," a WhatsApp spokesperson said, as reported by the Tasnim news agency.
Source: WhatsApp
Meta's privacy policy states that, thanks to end-to-end encryption, "your personal messages stay between you and who you send them to," which does not appear to be the case in Palestine.
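For readers unfamiliar with what that claim technically means: WhatsApp uses the far more elaborate Signal protocol, but the core end-to-end property can be sketched with a simple public-key "box," as below. Only the endpoints hold private keys, so a server that merely relays the message sees only ciphertext; this is a conceptual sketch, not WhatsApp's implementation.

```python
# Minimal sketch of the end-to-end principle (WhatsApp actually uses the
# more elaborate Signal protocol): each party holds a private key on their
# own device, and a relaying server only ever sees ciphertext.
from nacl.public import PrivateKey, Box

alice_sk = PrivateKey.generate()   # Alice's private key (never leaves device)
bob_sk = PrivateKey.generate()     # Bob's private key (never leaves device)

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"hello")

# The server relays `ciphertext` but cannot decrypt it: decryption needs
# a private key, which only the endpoints hold.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"hello"
```

The design choice matters for the surrounding dispute: if the property holds, message *content* is inaccessible to third parties, but metadata such as group membership, contacts, and timing is not covered by it.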
In 2019, Gaza-based journalists had their WhatsApp accounts permanently disabled; after the company was contacted, the accounts of individuals working for international media organizations were reinstated. In May 2021, during the first AI war on Gaza, a group of Palestinian journalists again found themselves blocked from WhatsApp messenger, a crucial tool for communicating with sources, editors, and the world beyond the blockaded strip. According to AP, around 12 of the 17 journalists said they were part of a WhatsApp group that shared information about Hamas military operations. Only four journalists had their accounts restored, and most of their content and chats were erased. A WhatsApp representative responded that the company deactivates accounts in line with its policy "to prevent harm and comply with relevant laws," that it had been in talks with media organizations over the preceding week regarding its protocols, and that "if any journalists were impacted, we will restore their accounts."
“As mentioned during the feature launch, the models may produce inaccurate or inappropriate outputs, as is common with all generative AI systems. We will keep enhancing these features as they develop and as more people share their feedback.”
(Kevin McAlister, a Meta spokesperson, Nov. 3, 2023)
With the emergence of generative AI tools, Meta has produced offensive and biased outputs that dehumanize Palestinians. For instance, WhatsApp's AI image generator produced emojis depicting gun-wielding children in response to the prompt "Palestinian," and Instagram's AI translation model rendered "Palestinian Thank God" as "Palestinian Terrorist."
Source: The Guardian
Meta is thereby infringing on Palestinians' human rights to freedom of expression, assembly, political participation, and non-discrimination, limiting their ability to share information and insight about their experiences as they happen.
Israel's artificial intelligence regulation and ethics
"We are at the beginning of a new era for humanity, the era of artificial intelligence."
(Benjamin Netanyahu, Prime Minister of Israel, June 2023)
In June 2023, Prime Minister Benjamin Netanyahu announced plans to develop a national strategy for the civil and security applications of artificial intelligence, following discussions with tech mogul Elon Musk and a visit to OpenAI CEO Sam Altman, both prominent figures in a field that has advanced dramatically in recent years. The late-2022 launch of the widely praised ChatGPT chatbot by OpenAI, a company Musk co-founded, showcased the potential of new artificial intelligence tools and sparked broad interest in the technology, along with concern about the risks it may present. According to Netanyahu, Musk suggested that Israel has the potential to lead in artificial intelligence development, while also stressing the importance of recognizing the technology's risks.
In a video posted on X, Netanyahu said that he and Musk had "discussed extensively two key points": first, the importance for governments to grasp both the potential benefits and risks of artificial intelligence, and second, his belief that Israel could emerge as a major player in the field.
With regard to private sector applications, Israel's AI policy is based on the idea of "Responsible Innovation," which emphasizes promoting innovation while encouraging accountability and the ethically aligned use of AI.
“The AI system’s use aligns completely with the legal obligations of the IDF – under both Israeli and international law.”
(MAG Corps (Hebrew), February 29, 2024)
According to the Association for Civil Rights in Israel, the IDF's response to the +972 article on its use of AI machines in war is that the article presents information in a biased and misleading manner. The IDF stated that the Gospel and Lavender AI systems provide investigators with information in a way that facilitates easy collection and understanding of the underlying data, and noted that for each recommendation from the "Gospel" system, a separate human review of the intelligence material is carried out. An investigator then decides whether to approve the target, and this decision is independently reviewed and approved by at least one other party. Finally, the IDF clarified that using the system does not alter the operational process for planning the method of attack on military targets and approving the actual attack; that process includes additional operational elements and is governed by binding commands, which require that every attack adhere to the principle of proportionality and to the duty to take all feasible measures under the circumstances to reduce harm to non-combatant civilians and civilian property.
Human rights and democracy in the use of AI
International human rights encompass a body of international laws, including the Universal Declaration of Human Rights, and regional human rights systems like that of the Council of Europe and its European Convention on Human Rights. These rights establish universal minimum standards rooted in values of human dignity, autonomy, and equality, in accordance with the rule of law. Human rights form the foundation for other regulations relevant to AI, such as data protection, and are essential to any democratic system. Within this overarching framework, binding regulations like product safety, and non-binding regulations like ethics, contribute to the responsible development of AI.
AI systems and technologies promise to enhance the protection and realization of human rights by making personalized education and medical diagnosis and treatment services more widely available and accessible. However, AI technologies can also pose challenges to human rights, democracy, and the rule of law, potentially leading to the accidental or deliberate violation of human rights. Policymakers, technologists, and all stakeholders must ensure that AI systems are designed in a manner that upholds the rule of law, human rights, democratic values, and diversity. The Council of Europe is conducting a feasibility study and identifying elements of one or more legal instruments, which may be binding or non-binding, to support the design, development, and implementation of AI systems that are in line with human rights, democracy, and the rule of law. This effort involves an ad hoc committee (CAHAI) and all sectors of the Council of Europe working together in a coordinated manner on a general legal instrument and specialized instruments.
As the use of artificial intelligence continues to expand, there is an increasing need for laws and regulations to prevent the harmful misuse of this technology. AI has the potential to greatly benefit society, but it also presents significant risks if not properly controlled. It is crucial for legislators to establish clear guidelines and restrictions on the use of AI to ensure its responsible and ethical utilization, including laws addressing privacy, bias, accountability, and transparency in AI systems. By enacting such laws, we can help mitigate the potential harm AI may cause and ensure its use for the greater good of society. Finally, in the era of artificial intelligence, Israel will continue to make significant advances in AI development. Despite political obstacles and the gap between academia and AI research, Israel's position in this field remains strong and will continue to attract global support and funding. The country will continue to apply AI technology to economic expansion, defense, and a range of business ventures. Israel's focus on its internal AI initiatives and on ethical concerns positions it well to maintain its status as the "Start-up Nation."
Sources:
https://www.washingtonpost.com/world/2024/04/05/israel-idf-lavender-ai-militarytarget/
https://www.politico.com/news/2024/03/03/israel-ai-warfare-gaza-00144491