Warfare technology: can the law really referee?

Harriet Hunter, law student at the University of Central Lancashire, explores the implications of AI in the development of weaponry and its effect on armed conflict in international humanitarian law


Artificial Intelligence (AI) is arguably the most rapidly emerging form of technology in modern society. Almost every sector and societal process has been or will be influenced by artificially intelligent technologies, and the military is no exception. AI has firmly earned its place as one of the most sought-after technologies available to countries engaged in armed conflict, with many pushing to test the limits of autonomous weapons. The mainstream media has circulated many news articles on 'killer robots' and the potential risks to humanity; the reality of AI's impact on military-grade weaponry, however, is far less transparent.

International humanitarian law (IHL) has been watching from the sidelines since the use of anti-personnel autonomous mines back in the 1940s, closely monitoring each country's advances in technology and responding to the after-effects of their use.

IHL exists to protect civilians not directly involved in conflict, and to restrict and control aspects of warfare. However, autonomous weapons systems are developing faster than the law, and many legal critics are concerned that humanity might suffer at the hands of a few. But in a politically bound marketplace, is there any place for such laws? If they were implemented, what would they look like, and who would be held accountable?

Autonomous weapons and AI – a killer combination?

Autonomous weapons have been at the forefront of military technology since the early twentieth century, playing a large part in major conflicts such as the Gulf War. Most notably, the first use of autonomous weapons came in the form of anti-personnel autonomous mines. These mines are set off by sensors, with no operator involvement in deciding who is killed, inevitably causing significant loss of civilian life. This led to anti-personnel mines being banned under the Ottawa Treaty 1997. However, autonomous weapon usage had only just begun.

In the 1970s, autonomous submarines were developed and used by the US Navy, a technology subsequently sold to several other technologically advanced countries. Since the deployment of more advanced AI, the sophistication of the weapons countries have been able to develop has led to a new term being coined: lethal autonomous weapons systems (LAWS). These are weapons which use advanced AI technologies to identify targets and engage them with little to no human involvement.

In academic research, LAWS are split into three 'levels of autonomy', each characterised by the degree of operator involvement required in their deployment. The first level is 'supervised autonomous weapons', otherwise known as 'human on the loop' systems, where a human can intervene to terminate engagement. The second level is 'semi-autonomous weapons', or 'human in the loop' systems, which once engaged will attack pre-set targets. The third level is 'fully autonomous weapons', or 'human out of the loop' systems, which operate with no operator involvement whatsoever.
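To make the distinction between these levels concrete, here is a minimal illustrative sketch in Python (the names and logic are invented for this article and do not describe any real weapons system), showing where the human sits in each decision loop:

```python
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_ON_THE_LOOP = "supervised"            # system acts; a human may abort
    HUMAN_IN_THE_LOOP = "semi-autonomous"       # engages pre-set targets once authorised
    HUMAN_OUT_OF_THE_LOOP = "fully autonomous"  # no operator involvement at all

def would_engage(level: AutonomyLevel, operator_authorised: bool, operator_aborted: bool) -> bool:
    """Toy model of whether engagement proceeds under each autonomy level."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return operator_authorised    # nothing happens without positive approval
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        return not operator_aborted   # proceeds by default; the human can only veto
    return True                       # fully autonomous: the machine decides alone

# A supervised ('on the loop') system engages unless the operator intervenes:
print(would_engage(AutonomyLevel.HUMAN_ON_THE_LOOP,
                   operator_authorised=False, operator_aborted=False))  # True
```

On this toy model, the legal debate maps onto the final branch: the fewer the checks before the machine proceeds, the less opportunity there is for a human to apply IHL's principles.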

LAWS rely on advances in AI to become more accurate. Currently, there are multiple LAWS either in use or in development, including:

  • The Uran-9 tank, developed by Russia, which can identify targets and engage without any operator involvement.
  • The Taranis unmanned combat air vehicle, under development in the UK by BAE Systems, an unmanned jet which uses AI programmes to attack and destroy large areas of land with very minimal programming.

The deployment of AI within the military has been far-reaching, but, like the weapons themselves, its applications are increasingly complex, and certain aspects of AI have been utilised more than others. Facial recognition, for example, can be used on a large scale to identify targets within a crowd. Certain weapons also carry technologies that can calculate the chance of hitting a target, and of hitting it a second time, by tracking its movements; this has been utilised especially in drones, to follow targets as they move from building to building.
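As a purely illustrative aside, the 'tracking movements' calculation described above can be imagined along these lines (a deliberately simplified sketch invented for this article, not drawn from any real targeting software): record a target's recent positions, estimate its velocity, and predict where it will be next.

```python
# Toy constant-velocity tracker: invented for illustration only.
def predict_next_position(track, dt=1.0):
    """Predict the next (x, y) fix from the last two observed positions,
    assuming the target keeps a constant velocity over the interval dt."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt   # estimated velocity components
    return (x2 + vx * dt, y2 + vy * dt)

track = [(0.0, 0.0), (3.0, 4.0)]     # two observed fixes, e.g. one second apart
print(predict_next_position(track))  # (6.0, 8.0)
```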

International humanitarian law — the silent bystander?

IHL is the body of law which applies during armed conflict. It has broad extra-territorial reach and aims to protect those not involved in the conflict, as well as to restrict warfare and military tactics. IHL has four basic tenets: distinction between civilian and military targets; proportionality (balancing military gain against harm to civilian life); precaution in attack; and the principle of 'humanity'. IHL closely monitors the weapons that countries develop and use, and (in theory) considers how those weapons fit within its principles. Currently, however, the law surrounding LAWS is vague, and with their rise IHL is having to adapt and tighten restrictions around certain systems.


One of IHL's main concerns surrounds the rule of distinction. It has been argued that weapons which are semi- or fully autonomous (human in the loop and human out of the loop systems) are unable to distinguish between civilian and military bodies, meaning that innocent lives could be taken through the mistake of an autonomous system. As mentioned previously, autonomous weapons are not a new concept: following the use of anti-personnel autonomous mines in the twentieth century, they were restricted because a mine makes no distinction between a civilian stepping onto it and a soldier doing so. IHL used the rule of distinction to propose a ban, which 128 nations signed in the Ottawa Treaty 1997.

The Martens Clause, a clause of the Geneva Conventions, aims to counter the notion that anything not explicitly regulated is permitted. IHL is required to control, and to a certain extent pre-empt, the development of weapons which directly violate certain aspects of the law. An example is the ban on blinding laser weapons agreed in 1995, on the basis that deliberate blinding amounts to a form of torture, directly violating the protected right not to be tortured. At the time, such weapons were not in use in armed conflict, but the ethical implications they raised for prisoners of war were a concern to IHL.

But is there a fair, legal solution?

Unfortunately, the chances are slim. More economically developed countries can purchase these technologies and navigate the political waters of the LAWS market, whilst less economically developed countries cannot.

An international ban on all LAWS has been called for, with legal critics arguing that IHL cannot fulfil its aims to the highest standard while permitting the existence, development and use of LAWS. The main issue intertwining AI, LAWS and IHL, it is argued, comes down to a single question: should machines be trusted to make life or death decisions?

Even with advanced facial recognition technology, critics are calling for a ban: no technology is without its flaws, so how can systems such as facial recognition be assumed to be fully accurate? The use of fully autonomous (human out of the loop) weapons, where a human cannot at any point override the technology, puts civilians at risk, and it is argued that this completely breaches the principles of IHL.

Some legal scholars have argued that the use of LAWS should be governed by social policy: a 'pre-emptive governing' of countries which use LAWS. Such a system would allow and assist IHL in regulating weapons at the development stage, which, it is argued, is 'critical' to avoiding a 'fallout of LAWS' and preventing humanitarian crisis. It would hold developers to account before any warfare begins. However, it could be argued that this lies outside the jurisdiction of IHL, which applies only once conflict has begun; this leads to the larger debate over what the jurisdiction of IHL is, compared with what it should be.

Perhaps IHL is delaying the implementation of potentially life-saving laws because powerful countries assert their influence in decision-making; such countries have the influence to block changes in international law where the 'best interests' of humanity do not align with their own military ambitions.

Such countries, the UK among them, are taking a 'pro-innovation' approach to AI in weaponry, meaning they generally oppose restrictions which could halt progress in the making. However, it has rightly been noted that these advanced technologies in the hands of terrorist organisations (who would not consider themselves bound by IHL) would have disastrous consequences; on this view, a complete ban on LAWS could lead to more violence than no ban at all.

Ultimately…

AI is advancing, and with it, autonomous weapons systems too. Weapons are becoming more advantageous to the military, more accurate and more precise. International humanitarian law, continually influenced by political stances and the economic interests of countries, is slowly attempting to build and structure horizontal legislation. However, law and technology are not developing at comparable paces, and this concerns many legal critics. The question remains: is the law attempting to slow an inevitable victory?

Harriet Hunter is a first year LLB (Hons) student at the University of Central Lancashire, with a keen interest in criminal law and the laws surrounding technology, particularly AI.
