Does Microsoft support Israel? Human rights concerns over AI

Editorial Team
Credit: AP Photo/Michel Euler

Recent revelations that Microsoft delivered advanced AI and cloud computing technologies to the Israeli defense forces have brought the company’s involvement with the Israeli military, especially during the war in Gaza, into the spotlight. This expanding collaboration has sparked a complex debate about corporate responsibility, human rights, and whether AI should be used in warfare at all. Microsoft also reportedly gave the Israeli army access to GPT-4, the OpenAI model, whose advanced capabilities are said to have assisted with operational planning and intelligence analysis. That access illustrates Microsoft’s distinct role in connecting commercial AI technology with military applications, despite OpenAI’s recent policy change prohibiting cooperation with military and intelligence clients.

History of Microsoft’s support for Israel

Following the October 7, 2023, Hamas attack, Microsoft worked closely with the Israeli Ministry of Defense (IMOD), according to leaked records and investigative reports. The Israel Defense Forces (IDF) faced a sudden surge in demand for processing power and data storage to handle the flood of operational and intelligence data generated during the fighting. In response, Microsoft provided expanded Azure cloud services, artificial intelligence technologies, and expert support, reportedly totaling thousands of hours of technical assistance and at least $10 million in contracts.

Microsoft’s services supported vital military and intelligence activities in addition to administrative tasks such as email and file management. Using Microsoft’s Azure platform, the IDF processed and analyzed intelligence data obtained through mass surveillance, including intercepted communications and the monitoring of enemy movements. Cross-referencing this data with Israel’s proprietary AI-enabled targeting systems enhanced the military’s operational capabilities.

Allegations and concerns

Human rights organizations, campaigners, and some Microsoft employees are deeply concerned by these findings. According to investigations, most notably by the Associated Press, targeting decisions in Gaza and Lebanon were automated using AI technologies and cloud services from Microsoft and OpenAI. Critics argue that such automated processes resulted in erroneous strikes that killed civilians, a problem that casts the shadow of possible war crimes over Big Tech.

Whatever their advantages, critics contend that AI-based systems can fail because of poor data or flawed algorithms, leading to faulty targeting and the deaths of innocent civilians. Applying AI to combat in heavily populated areas such as Gaza also raises severe ethical questions about accountability and the safety of civilians.

The Boycott, Divestment, Sanctions (BDS) movement and its supporters have accused Microsoft of aiding and abetting Israeli apartheid and genocide by knowingly supplying the technological means through which serious human rights violations are carried out. They claim that Microsoft’s cloud and AI services, used to monitor, target, and attack Palestinians, are as critical to Israel’s military activity as physical weapons and concrete walls.

Microsoft’s response

Microsoft publicly acknowledged providing cloud and AI services to the Israeli military during the Gaza crisis in response to growing demand. According to the company, it collaborates with the Israeli government to safeguard Israeli cyberspace against external threats, offering the IMOD software, professional services, Azure cloud, and AI services, including language translation. Microsoft stressed that its relationship with the IMOD is a typical commercial one, governed by its terms of service, Acceptable Use Policy, and AI Code of Conduct. These guidelines expressly forbid using Microsoft technology to cause harm or break the law and require customers to adopt responsible AI practices, such as human oversight and access controls.

In response to media reports and staff concerns, Microsoft hired an outside firm to investigate the claims and carried out an internal inquiry. According to the company, there is currently no evidence that its AI and Azure technologies have been used to harm or target individuals in Gaza. Citing confidentiality, Microsoft has refrained from making precise disclosures about the nature of its military contracts or the applications of its technologies.

Employee and public backlash

The revelations have caused considerable discontent within Microsoft. The company’s role in military engagements linked to civilian deaths has provoked a moral outcry among employees, particularly in the AI division. Some have participated in or supported campaigns such as No Azure for Apartheid, which urge Microsoft to stop providing the technology behind Israeli military actions believed to infringe on human rights.

Civil society organizations and activists have called for greater accountability and transparency and have urged Microsoft to stop selling dual-use and weaponized technologies. They argue that the company is directly exposed to legal and reputational risk, potentially including liability under international criminal law, if it fails to prevent its technology from being used in ways that gravely endanger civilians.

Broader implications for tech and warfare

Microsoft’s case illustrates how private technology companies are becoming increasingly involved in contemporary conflict. The blurring of the boundary between military and civilian applications of cloud computing and artificial intelligence raises difficult questions about what tech companies are ethically obliged to do. As AI becomes more deeply involved in intelligence gathering, decision-making, and targeting, the potential for exploitation and unintended consequences grows.

The situation also shows how difficult it is to regulate AI in conflict zones, as laws and regulatory processes often lag behind the pace at which new technology is adopted. As demonstrated in Gaza, the use of commercial AI models in active combat sets a precedent, with worldwide implications, for how technology may shape future conflicts.

How does Microsoft ensure its AI services are not used for harm?

Microsoft uses a multi-layered, all-encompassing strategy that combines governance guidelines, ongoing monitoring, and technical controls to make sure its AI services are not misused.

  • AI Red Team: Microsoft’s internal AI Red Team is a specialized, multidisciplinary team that simulates adversarial scenarios and real-world attacks to test AI models rigorously. Before deployment, the team looks for vulnerabilities, including bias, ethical hazards, prompt injection attacks, and possible misuse, to make sure AI systems operate safely and reliably.
  • Code of Conduct for Enterprise AI Services: Microsoft mandates that all users of its AI services abide by a stringent Code of Conduct that forbids the use of AI to commit crimes or cause harm.
  • Responsible AI Principles: Accountability, transparency, privacy, fairness, and reliability are the core values on which AI development at Microsoft is based. These values underpin every phase of the AI lifecycle, from development to deployment, and are enforced through tools such as the Responsible AI Dashboard, which monitors the behavior of AI systems, and governance frameworks such as the Responsible AI Standard.
  • Layered risk mitigation: Microsoft addresses the risks of harm and misuse at each layer of the AI technology stack: the model, the API service, and the application. This is an ongoing process as AI products evolve, involving collaboration with partners such as OpenAI and continuous investment in improving safety measures.
  • Data security and privacy: Microsoft treats privacy as a fundamental human right. Its AI products follow privacy- and security-by-design principles, with controlled access to user data, strong authentication, and data encryption to prevent unauthorized access.
  • Combating Abusive AI Content: Microsoft uses content classifiers, proactive content filtering, automated testing, and rapid removal of abusive users to prevent misuse of its generative AI tools. To verify AI-generated material and counter deepfakes, it also supports watermarking and media provenance technologies. A simplified sketch of how such a layered filtering step might work appears after this list.
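To make the idea of layered content filtering more concrete, the hypothetical Python sketch below screens both a user’s prompt and a model’s output against blocked content categories before anything is returned. The category names, the threshold, and the classify() and generate() stubs are illustrative assumptions only, not Microsoft’s actual systems or APIs.

```python
# Hypothetical sketch of layered content filtering for a generative AI service.
# All category names, thresholds, and stubs are assumptions for illustration;
# this is not Microsoft's implementation.

from dataclasses import dataclass

BLOCKED_CATEGORIES = {"violence", "weapons_targeting", "surveillance_abuse"}
BLOCK_THRESHOLD = 0.8  # assumed classifier confidence above which a request is refused


@dataclass
class Screening:
    category: str
    score: float  # classifier confidence in [0, 1]


def classify(text: str) -> list[Screening]:
    """Stand-in for a real content classifier returning per-category scores."""
    # A production system would call trained classifiers here.
    return [Screening("violence", 0.05)]


def generate(prompt: str) -> str:
    """Stand-in for a call to a generative model."""
    return f"Model response to: {prompt}"


def is_allowed(text: str) -> bool:
    """Allow text only if no blocked category scores above the threshold."""
    return all(
        s.category not in BLOCKED_CATEGORIES or s.score < BLOCK_THRESHOLD
        for s in classify(text)
    )


def serve(prompt: str) -> str:
    # Layer 1: screen the incoming prompt before it reaches the model.
    if not is_allowed(prompt):
        return "Request blocked by content policy."
    response = generate(prompt)
    # Layer 2: screen the model output before returning it to the user.
    if not is_allowed(response):
        return "Response withheld by content policy."
    return response


if __name__ == "__main__":
    print(serve("Summarize this public press release."))
```

In a real deployment, the classifier and model stubs would be replaced by production systems, and blocked requests would typically also be logged for abuse review rather than simply refused.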

Final words: Does Microsoft support Israel?

The revelation of the deepened links between Microsoft and the Israeli military during the Gaza war has placed the company in a situation that tightly interweaves geopolitics, ethics, and technology. Although Microsoft insists it is complying with responsible AI standards and has found no sign of misuse, the concerns raised by investigations, employees, and human rights organizations demand serious attention. This episode urges tech companies to be more open about their military contracts and the practical effects of their AI systems. To ensure that technological advances do not come at the expense of human rights and civilian lives, it also underlines the urgent need for international rules and accountability mechanisms governing the use of AI in conflict.
