Warfare technology: can the law really referee?


Harriet Hunter, law student at the University of Central Lancashire, explores the implications of AI in the development of weaponry and its effect on armed conflict in international humanitarian law


Artificial Intelligence (AI) is arguably the most rapidly emerging form of technology in modern society. Almost every sector and societal process has been or will be influenced by artificially intelligent technologies, and the military is no exception. AI has firmly earned its place as one of the most sought-after technologies available for countries to utilise in armed conflict, with many pushing to test the limits of autonomous weapons. The mainstream media has circulated many news articles on ‘killer robots’ and the potential risks to humanity — however, the reality of the impact of AI on the use of military-grade weaponry is not so transparent.

International humanitarian law (IHL) has been watching from the sidelines since the use of antipersonnel autonomous mines back in the 1940s, closely monitoring each country’s advances in technology and responding to the aftereffects of usage.

IHL exists to protect civilians not involved directly in conflict, and to restrict and control aspects of warfare. However, autonomous weapons systems are developing faster than the law — and many legal critics are concerned that humanity might suffer at the hands of a few. But, in a politically bound marketplace, is there any place for such laws, and if they were to be implemented, what would they look like, and who would be held accountable?

Autonomous weapons and AI – a killer combination?

Autonomous weapons have been at the forefront of military technology since the early 1900s, playing a large part in major conflicts such as the Gulf War. Most notably, the first use of autonomous weapons came in the form of anti-personnel autonomous mines. Anti-personnel autonomous mines are set off by sensors, with no operator involvement in deciding who is killed, and they inevitably caused significant loss of civilian life. This led to anti-personnel autonomous mines being banned under the Ottawa Treaty 1997. However, autonomous weapon usage had only just begun.

In the 1970s, autonomous submarines were developed and used by the US Navy, a technology which was subsequently sold to multiple other technologically advanced countries. Since the deployment of more advanced AI, the weapons that countries have been able to develop have led to a new term being coined: ‘LAWS’. Lethal Autonomous Weapons Systems (LAWS) are weapons which use advanced AI technologies to identify targets and deploy with little to no human involvement.

LAWS are, in academic research, split into three ‘levels of autonomy’, each characterised by the amount of operator involvement required in their deployment. The first level is ‘supervised autonomous weapons’, otherwise known as ‘human on the loop’ — these weapons allow human intervention to terminate engagement. The second level is ‘semi-autonomous weapons’ or ‘human in the loop’ — weapons that, once engaged, will attack pre-set targets. The third level is ‘fully autonomous weapons’ or ‘human out of the loop’, where the weapons system operates with no operator involvement whatsoever.

LAWS rely on advances in AI to become more accurate. Currently, there are multiple LAWS either in use or in development, including:

  • The Uran-9 tank, developed by Russia, which can identify targets and deploy without any operator involvement.
  • The Taranis unmanned combat air vehicle, being developed in the UK by BAE Systems — an unmanned jet which uses AI programmes to attack and destroy large areas of land with very minimal programming.

The deployment of AI within the military has been far-reaching. Like autonomous weapons themselves, artificial intelligence is increasingly complex, and its application within military technologies is no different. Certain aspects of AI have been utilised more than others: facial recognition, for example, can be used on a large scale to identify targets within a crowd. Alongside that, certain weapons can calculate the chance of hitting a target, and of hitting it a second time, by tracking its movements — a capability used especially in drones to follow targets as they move from building to building.

International humanitarian law — the silent bystander?

IHL is the body of law which applies during an armed conflict. It has a high extra-territorial extent and aims to protect those not directly involved in conflict, as well as to restrict warfare and military tactics. IHL has four basic tenets: distinction between civilian and military targets; proportionality (ensuring that military gain is balanced against harm to civilian life); precautions in attack; and the principle of ‘humanity’. IHL closely monitors the progress of the weapons that countries are beginning to use and develop, and is (in theory) considering how the use of these weapons fits within those principles. However, the law surrounding LAWS is currently vague. With the rise of LAWS, IHL is having to adapt and tighten restrictions surrounding certain systems.


One of its main concerns surrounds the rule of distinction. It has been argued that weapons which are semi- or fully autonomous (human in the loop and human out of the loop systems) are unable to distinguish between civilian and military bodies. This would mean that innocent lives could be taken through the mistake of an autonomous system. As mentioned previously, autonomous weapons are not a new concept: subsequent to the use of anti-personnel autonomous mines in the 1900s, they were restricted because there was no distinction between civilians ‘stepping onto the mines’ and military personnel doing so. IHL used the rule of distinction to propose a ban, signed by 128 nations in the Ottawa Treaty 1997.

The Martens Clause, a clause of the Geneva Conventions, aims to control the ‘anything not explicitly regulated is unregulated’ problem. IHL is required to control, and to a certain extent pre-empt, the development of weapons which directly violate certain aspects of law. An example of this is the banning of ‘laser blinding’ autonomous weapons in 1990 — ‘laser blinding’ was seen as a form of torture, directly violating a protected human right: the right not to be tortured. At the time, ‘laser blinding’ weapons were not in use in armed conflict, but the ethical implications of these weapons for prisoners of war were a concern to IHL.

But is there a fair, legal solution?

Unfortunately, the chances are slim. More economically developed countries can purchase and navigate the political waters of the lethal autonomous weapons systems market — whilst less economically developed countries are unable to purchase these technologies.

An international ban on all LAWS has been called for, with legal critics stating that IHL cannot fulfil its aims to the highest standard while allowing the existence, development and usage of LAWS. It is argued that the main issue intertwining AI, LAWS and IHL is the question: should machines be trusted to make life or death decisions?

Even with advanced facial recognition technology, critics are calling for a ban: no technology is without its flaws, so how can we assume systems such as facial recognition are fully accurate? The use of fully autonomous (‘human out of the loop’) weapons, where a human cannot at any point override the technology, means that civilians are at risk. It is argued that this completely breaches the principles of IHL.

Some legal scholars have argued that the usage of LAWS should come down to social policy — a ‘pre-emptive governing’ of countries which use LAWS. This proposed system would allow and assist IHL in regulating weapons at the development stage, which, it is argued, is ‘critical’ to avoiding a ‘fallout of LAWS’ and preventing humanitarian crisis. Such a policy would hold developers to account prior to any warfare. However, it could be argued that this lies outside the jurisdiction of IHL, which applies only once conflict has begun — leading to the larger debate of what the jurisdiction of IHL is, compared with what it should be.

Perhaps IHL is prolonging the implementation of potentially life-saving laws because powerful countries assert their influence in decision-making; these countries have the power to block changes in international law where the ‘best interests’ of humanity do not align with their own military advances.

Such countries, like the UK, are taking a ‘pro-innovation’ approach to AI in weaponry, meaning they are generally opposed to restrictions which could halt progress. However, it has been rightly noted that these ‘advanced technologies’ in the hands of terrorist organisations (who would not be bound to follow IHL) would have disastrous consequences. Proponents of this view argue that a complete ban on LAWS could lead to more violence than no ban at all.

Ultimately…

AI is advancing, and with it, so are autonomous weapons systems. Weapons are becoming more advantageous to the military, with technology becoming more accurate and more precise. International humanitarian law, continually influenced by political stances and the economic interests of countries, is slowly attempting to build and structure horizontal legislation. However, the pace at which the law develops is not comparable to that of technology, which concerns many legal critics. The question remains: is the law attempting to slow an inevitable victory?

Harriet Hunter is a first year LLB (Hons) student at the University of Central Lancashire, who has a keen interest in criminal law, and laws surrounding technology; particularly AI.

Contracts on Monday, machine learning on Tuesday: The future of the LLB


Université Toulouse Capitole LLM student Sean Doig examines technology’s impact on legal education and training


No profession is immune to the intrusion of disruptive technologies. Inevitably, the legal profession is no exception, and the practice of law and the administration of justice have grown incredibly reliant on technology.

The integration of new legal technologies into legal services is driven by the incentive to provide more efficient, cost effective, and accessible services to its clients. Indeed, modern lawyers are implementing paperless offices and “cloud-based practice-management systems, starting up virtual law practices, and fending off challenges from document preparation services like Legal Zoom.”

Such profound change has even shaped new specialisms within the legal profession, including those known as ‘legal technologists’: a group of skilled individuals who can “bridge the gap between law and technology.” While the name suggests a ‘legally-minded coder’, the reality is that the majority of professional legal technologists lack training or experience in both the practice of law and the profession of engineering and technology management.

Legal technologists occupy a lucrative and growing niche, and it is not enough for these professionals to lack experience and knowledge of legal practice if they are to develop sustainable legal technologies that assist the delivery of services to clients.

Indeed, disruptive technologies are constantly evolving, and with the rapid advancement of Artificial Intelligence (‘AI’) and the Metaverse, there is a need for immediate change in how the next generation of legal minds is trained. While this sort of fearmongering around obsolete skills and doomed professions is relatively commonplace among CEOs of AI companies, the need for upskilling and adaptability among lawyers has been reiterated by more sceptical academics and legal professionals for years.

As early as the 1950s, dictation machines and typewriters changed the working practices of lawyers and legal secretaries. In the 1970s, law firms began using computers and LexisNexis, an online information service, which changed the way legal teams performed research to prepare their cases. One of the better-known ‘doomsayers’ is Richard Susskind, whose boldly — although perhaps rather prematurely — titled book The End of Lawyers was published in 2008, well before the era of ‘Suits’!

Despite Susskind’s earlier predictions of the impending end of lawyers, his subsequent book, Tomorrow’s Lawyers, moves beyond the common view that technology will simply remove jobs; instead, it argues that technology will assist the work of professionals and that more jobs will involve applying technological solutions to produce cost-efficient outcomes. Although technology is developing rapidly to assist professionals, Susskind identifies a lack of enthusiasm among law firms to evolve their traditional practices. Where firms are enthusiastic about incorporating technology, it is usually because AI or other technologies can boost profits and lower operating costs, rather than because they assist the lawyer and deliver for the client.

The incentive for law firms to incorporate technology into their working practices is purely economic and fear-driven. Firms that do not incorporate technology will lose clients to competitors that have efficient technological means at their disposal. There is little credible advice as to how firms can effectively alter their business models to integrate technology. After all, the billable hour is the crux of a law firm, and with AI speeding up historically slow and tedious work, its value is diminishing.

Without dwelling too much on the fundamentals of capitalism and its effectiveness as an economic system, it is important to note that technology companies — such as OpenAI and Meta — are mostly funded and motivated by shareholders. The rapid pace of technological development is driven by the need to produce results and dividends for those shareholders. For a product to perform well economically, there is a rush to outdo competitors and to be disruptive in the market. If successful, the value of the company increases, the value of the shares increases, and the company has more equity with which to continue to grow.

This means that technology is advancing at a fast rate and is outpacing the technical skills of professionals. The cost of new technologies factors in the markup that tech companies seek in order to satisfy their shareholders and advance their research and development (R&D). As Susskind notes, the durability of small law firms will be put into question in the 2020s against the rise of major commercial law firms able to afford to invest in competitive new technologies.

What does this mean for law students? New skills are required to enter the new technological workforce, and those graduates that meet the skillset will be more in demand than the rest of their cohort. As a result, legal education must equally evolve to adequately prepare law students for working in technological law firms. As Susskind highlights: “law is taught as it was in the 1970’s by professors who have little insight into or interest in the changing legal marketplace”, and graduates are ill-prepared for the technological legal work that their employer is expecting from them.


It should be noted that some graduate and post-graduate courses do exist to teach some of the technological skills needed for the new workplace. For example, a simulation is currently in use in a postgraduate professional course, the Diploma in Legal Practice at the Glasgow Graduate School of Law. Nevertheless, the idea here is that the burden should be placed on law schools and that technological skills should be taught at the earliest stage in order to best prepare graduates for the workplace of tomorrow.

Although it is argued that the original purpose of the LLB is to teach black letter law, and that skills for legal practice should be left to post-graduate legal training, this neglects those law students who do not wish to pursue traditional post-graduate legal education and instead opt for an alternative career path in law.

In order for the value of an LLB to be upheld, it must adapt to meet the growing demands of the industry it serves. Its sanctity and popularity rest on its ability to be of use to any student seeking the best possible skills and, therefore, prospects in the job market. If the LLB is to survive, it must itself compete with more attractive courses such as ‘Computer Science’, ‘Data Analysis’ and ‘Engineering’. It is not enough for law professors to continue to falsely assume that “students already get it”, or that if graduates work for a law firm then critical technology choices have already been determined, “including case management software, research databases, website design, and policies on client communication.”

Furthermore, firms are “increasingly unwilling to provide training to incoming associates” and seek those graduates who already possess background knowledge. Undoubtedly, technology skills will elevate students’ employability, and those with tech skills will be in high demand by traditional law firms and by tech companies that service the legal industry.

While some law schools have been introducing ‘Legal Technology’ or ‘Law and Technology’ modules into their curriculums, it can be argued that these are insufficient to cover the array of specific skills that need to be taught, focusing instead merely on the impact of technology on the legal sector. The lack of innovation in law schools is blamed on a lack of imagination on the part of law professors and their institutions, which are fearful of experimenting with the status quo of syllabuses. Institutions with the courage to experiment with their curriculum and teach skills desirable in the legal market will attract, and better serve, a greater number of students for the new world of work.

Perhaps the most elaborate attempt to revolutionise legal education is the theoretical establishment of an MIT School of Law by author Daniel Katz. ‘MIT Law’ would be an institution delivering a polytechnic legal education, focusing on “the intersection of substantive law, process engineering, computer science and artificial intelligence, design thinking, analytics, and entrepreneurship.” The institution would produce a new kind of lawyer: one possessing the necessary skills to thrive in legal practice in the 21st century. With science, technology, engineering, and mathematics (“STEM”) jobs dominating the job market, there is an overlap into the legal market, giving rise to a prerequisite — or functional necessity — for lawyers to have the technical expertise to solve traditional legal problems that are interwoven with developments in science and technology.

This hypothetical law school may seem far-fetched, but the underlying principle should be adapted to the modern LLB. Indeed, the curriculum should choose its courses based on an evaluation of the future market for legal services and adapt to the disruptive technologies becoming commonplace in the workplace. A hybrid of traditional law courses, such as contract law, with more technical courses, such as machine learning or e-discovery, should become the new normal to ensure the effective delivery of the best LLB of the future. Each course would be carefully evaluated in light of the current and future legal labour market to ensure that students are given the best possible chances after leaving the institution, whether they go on to post-graduate legal studies or not.

Sean Doig is an LLM student at Université Toulouse Capitole specialising in International Economic Law. He is currently working on his master’s thesis, and displays a particular interest in international law, technology and dispute resolution.

5 Ways India’s Digital Personal Data Protection Act 2023 differs from Europe’s GDPR


Mayank Batavia takes a deep dive into data protection mechanisms in India and Europe


In August 2023, the Digital Personal Data Protection Bill, 2023 was passed by the two houses of the Indian Parliament. As a result, it has now become the Digital Personal Data Protection Act, 2023, making it legally enforceable.

Elsewhere, data privacy laws of varying complexity have been introduced in different countries over time. Among them, the European Union’s General Data Protection Regulation (GDPR) is considered both comprehensive and strict.

Before comparing India’s Digital Personal Data Protection Act, 2023 and the GDPR, let’s take a moment to understand why data privacy is both important and complex.

The complexity of data explosion

Less than a century ago, important data was printed on paper and stored in books and bound documents. You needed physical space, so if you wanted to store five books instead of one, you’d need five times the space.

Digital data storage changed everything.

Dropbox estimates that about 6.5 million pages of documents can be stored on a 1TB hard drive, a storage device about one-and-a-half times the size of your palm. By the same measure, even a standard smartphone can store over 25 movies in HD.
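
For a rough sense of where figures like these come from, here is a quick back-of-the-envelope calculation. The per-page and per-movie file sizes below are illustrative assumptions, not Dropbox’s own numbers:

```python
# Rough storage arithmetic (illustrative assumptions, not official figures)
TB_IN_MB = 1_000_000          # 1 TB expressed in megabytes (decimal convention)

page_size_mb = 0.15           # assumed average size of one scanned/typed page
pages_per_tb = TB_IN_MB / page_size_mb
print(f"Pages per 1 TB: {pages_per_tb:,.0f}")    # roughly 6-7 million pages

phone_storage_gb = 128        # a typical mid-range smartphone
hd_movie_gb = 4               # assumed size of one HD movie file
print(f"HD movies per phone: {phone_storage_gb // hd_movie_gb}")  # ~30 movies
```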

And because such data storage is easily available to everyone, from governments to organizations and institutions to individuals, it becomes very difficult for a legal body to regulate data protection, storage and sharing.

About GDPR

The European Union brought the GDPR into effect in May 2018. You are expected to comply with the GDPR if you store, process, transfer, or access data of residents of the member-states (27 EU countries and 3 EFTA countries).

It is a forerunner to many privacy regulations, including India’s DPDP and the CCPA (California Consumer Privacy Act). The GDPR’s requirements are stringent and the cost of non-compliance is stiff. For such reasons, the GDPR has become a model for other countries.

About India’s DPDP

India’s Digital Personal Data Protection Act (DPDP) came into effect half a decade after the GDPR. This gave the DPDP the advantage of studying the GDPR and other regulations.

Two key terms

It will help to keep in mind what the below two terms mean for these two regulations:

Data Controller: The natural or legal person that decides why and how the personal data should be processed. The DPDP uses the term Data Fiduciary instead of Data Controller.

Data Processor: The natural or legal person that processes personal data on behalf of the Data Controller.

How is India’s DPDP different from the GDPR?

The EEA and India operate under very different social, political, historical, and even commercial parameters. So, it’s only natural that their privacy laws have some differences.

For example, Article 9 of the GDPR sets out clear categories of data that cannot be processed. Processing data with the objective of, say, determining the political beliefs or sexual orientation of a person is expressly forbidden. The DPDP doesn’t lay out these categories.

Here are the key differences between the Digital Personal Data Protection Act and the GDPR.

1. The enshrined principles

GDPR: The GDPR takes a defined route to establishing what data privacy is and what its guiding principles are. The seven principles that lie behind the GDPR are lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability.

DPDP: The Act does not explicitly list its principles in the way the GDPR does. However, the report by the Justice B N Srikrishna Committee, appointed to examine the requirements, validity, scope, and other aspects of data protection, mentions two guiding factors that shaped the current law.

The first emerges from the Directive Principles of State Policy, which says that the state must act as a facilitator of human progress. Hence, the DPDP is drafted in a way to encourage the growth of the private enterprise without compromising an individual’s right to privacy.


The second is a self-disciplinary idea for the state: it admits that the state is “prone to excess”. Therefore, it’s important to empower individuals with fundamental rights, which they may use against the state if it begins to border on excess. They may also be used if private enterprises attempt to abuse the freedom the state grants them.

The data protection framework has been built so that the right to privacy, now a fundamental right in India, gets legal endorsement. This framework offers protection to the individual against both state and non-state actors.

2. How the data is processed

GDPR: If a piece of data is a part of a filing system and is personal in nature, the GDPR will apply to it. Whether it has been processed mechanically, manually, or electronically is immaterial for the GDPR.

DPDP: Against that, the DPDP is very specific. It clearly states that the processing of the data needs to be “…wholly or partly automated operation…”.

There could be several reasons why the DPDP limits the definition of processing in this way. One explanation is that if the scope had included all sorts of processing, the law would have been too complex and mammoth to enforce, thereby defeating its purpose.

The Indian government is pushing for digitalization and alongside that, Indian consumers are also showing a clear change in the way they share their personal information. So, in the next five years or so, a large chunk of data is set to be digitized anyway.

3. Data Protection Boards and enforcement

As technology lets us collect an increasingly wider variety of data, what is personal data isn’t always easy to define. That adds another level of complexity in enforcement of data privacy regulations.

For instance, role email addresses (the ones like sales@, admin@, or billing@) are rarely used to sign up for newsletters, because they are team addresses. And they are often publicly displayed on websites. And yet, marketers indiscriminately spamming role addresses need to be kept in check.

The GDPR and the DPDP have built elaborate mechanisms to ensure that they protect the privacy of people without making things unduly difficult for businesses.

GDPR: The GDPR brought into existence the European Data Protection Board (EDPB). The EU member states have designated independent, supervisory public authorities. Each of these supervisory authorities is the point of contact for the data controller or processor within each member-state. However, it’s the EDPB that will ensure that the enforcement is consistent across the EU and beyond.

There are national DPAs (Data Protection Authorities) which work with national courts to enforce the GDPR. If more than one member state is involved, the EDPB will step in. That makes the EDPB a single window for enforcement.

DPDP: The DPDP Act has proposed a board called the Data Protection Board of India (DPBI). (As of 27 November 2023, the DPBI had not yet been formed.) The DPBI will have a chairperson, board members, and employees.

Among other things, the DPBI differs from the EDPB (of the EU) in that the former doesn’t hold powers to formulate any rules, while the latter does.

The DPBI receives complaints, reviews them to decide whether a complaint warrants inquiry, and passes interim and final orders. It will work with other law enforcement agencies if required, which means it can cast a wide net. Besides, appeals from the DPBI are passed on to the Telecom Disputes Settlement Authority of India (TDSAI), and appeals from the TDSAI may be taken to the Supreme Court.

4. Consent and responsibility

GDPR: The GDPR has a long list of lawful bases for processing data. That means the consent for data processing is granular and detailed. The GDPR requires that you display notice at the time of collecting the personal data.

The onus of compliance is on the data controllers as well as the data processors, depending upon the nature of compliance or breach.

DPDP: It appears that the contents of the DPDP notice are relatively limited – nature of data, purpose of processing that data, guidelines for grievance redressal and a few other things. Against that, the GDPR notice is much more detailed.

Unlike the GDPR, the DPDP holds the data fiduciary responsible even for the data processors they engage. That means that in case of a breach of compliance, the DPDP would hold the data fiduciary responsible.

There are two likely reasons why the DPDP made this stipulation, instead of allowing a joint-and-several form of liability. One, it was the data fiduciary that defined the purpose of collecting and processing data, and will likely remain the sole beneficiary of the processed data (The data processor typically offers a service to process the data, but is unlikely to gain anything beyond the processing fees). Hence, the onus must lie with the data fiduciary.

Two, because of this stipulation, the data fiduciary will make sure that all the security measures it has in place are proportionately reflected in the measures that the processor takes. That will ensure the data fiduciary remains alert to the standards of every entity in its supply chain.

5. Children’s data

While both the EU and India actively seek to protect their children, there are some divergences in how this is approached.

Culturally, people in India look at family – and not the individual – as a unit of the society. As a result, some western conventions of privacy don’t apply. For instance, many children aren’t assigned a separate room for themselves. Even when a child has a separate room for themselves, they seldom keep it locked, and members of the family freely move around in and out of rooms of one another.

The average Indian parent engages their children in a way that’s different from the way an average European or American parent will. The Indian parent is more hands-on and involved: they believe sharing important information within the family is key to bonding, well-being, and even overall safety.

With all this context, it’s not unusual to routinely share account passwords within the family. That blurs the lines of privacy in the familial context. In the European Union, this would be extremely rare.

Finally, the legislature and the judiciary in India take cognizance of the unique relationship between parents and their offspring (e.g. Maintenance and Welfare of Parents and Senior Citizens Act, 2007). All this, in a small way, might partially account for some of the differences between the GDPR and the DPDP.

GDPR: Article 57 specifically requires the supervisory authorities of member nations to pay attention to “activities addressed specifically to children” while promoting public awareness and understanding.

The GDPR sets an age limit of 16 years for the definition of a child. That means a person below 16 years of age qualifies as a child, so parental consent comes into the picture for processing their data.

There is, however, an interesting exception mentioned in Recital 38. It clearly states that when providing “preventive or counseling services directly to a child”, the consent from the guardian or parent is not necessary.

DPDP: A person who has not attained the age of 18 years is defined as a child under the DPDP. Before processing the data of children, a verifiable consent from parents (or legal guardians) is required.

One thing that’s not entirely clear is why, for the purpose of consent, the DPDP has clubbed people with disabilities with children. Among other reasons, it may be due to the fact that both groups receive considerable support from parents.

Another interesting feature of the DPDP is that it clearly prohibits a Data Fiduciary from processing data that can “cause any detrimental effect on the well-being of a child”. The Data Fiduciary is also clearly prohibited from tracking or monitoring children or serving targeted advertising directed at children.

To some extent, it places a certain onus on the Data Fiduciary. That’s because children today are some of the heaviest users of social media and digital platforms. As a result, an organisation may already be digitally collecting their behavioural data and serving ads accordingly. In case of a dispute or disagreement, it could be difficult to draw the lines.

Concluding remarks

Both the DPDP and the GDPR reflect a considered, mature, and yet strict approach to protecting the privacy and data of their people.

And yet it’s important to remember that the two sets of regulations aim at two different geographies and two different bodies. While compliance with one will make compliance with the other easier, there are some provisions unique to each of the two.

In a world where data is shared, stored, and processed more widely than ever before, organizations can profitably leverage data while remaining compliant with regulations.

Mayank Batavia works in the tech industry within the email organisation space. 

Why we need to take a closer look at ‘loot boxes’


Aspiring barrister Georgia Keeton-Williams delves into why more needs to be done to protect children from in-game currencies

The pandemic thrust video games into mainstream popularity. They provided an escape from furloughed boredom and gave a platform for people desperately attempting to connect with people outside their bubble. Ultrafast play and hyper-realistic graphics are worlds away from the OG video game pong, much like, as you will see, the current money-making strategies.

With uSwitch, a broadband comparison site, estimating that 91% of children aged 3-15 play video games, the issue comes when these revenue generating strategies reach the consoles of children.

The ‘loot boxes’ problem

One of these strategies is ‘loot boxes’. How loot boxes appear changes depending on the game, but they are often depicted as in-game chests (Coinmaster) or card-pack look-alikes (FIFA). Sometimes, they are something completely outlandish, such as Fortnite’s ‘loot llamas’ (a loot box shaped like a piñata llama). Simply put, a loot box is just a container holding a random item (or items) which a player will receive. These can be bought with in-game or real-world money.

When opening a loot box, a player does not know what they will get. They could gain a heavily sought-after item with high value, or a repeat item that is basically worthless. Some in-game items are even being sold on secondary markets for real-world money. For example, the most expensive knife on SkinPort, a marketplace for in-game items, is listed at over £44,000 (though the site estimates its actual worth at circa £1,000)! This happens even though trading is usually against both the game’s and the platform’s terms and conditions.
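
To make the mechanic concrete, the draw inside a loot box can be thought of as a weighted lottery. The sketch below is purely illustrative — the item names and drop rates are invented and are not taken from any real game:

```python
import random

# Hypothetical drop table: item -> probability of being drawn (must sum to 1)
DROP_TABLE = {
    "common skin":      0.80,
    "rare emote":       0.15,
    "legendary weapon": 0.05,
}

def open_loot_box() -> str:
    """Return one random item, weighted by the hypothetical drop rates."""
    items = list(DROP_TABLE)
    weights = list(DROP_TABLE.values())
    return random.choices(items, weights=weights, k=1)[0]

# A player opening ten boxes has no way of knowing what each will contain
print([open_loot_box() for _ in range(10)])
```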

The legal response

Loot boxes sometimes use roulette-style wheels to show the player what they have ‘won’, often accompanied by flashing lights and sound effects. Importantly, roulette wheels are a ‘game of chance’ and are classed as gambling under s6 of the Gambling Act 2005. It was this similarity that led the Gambling Commission, the UK’s gambling regulator, to assess whether loot boxes could be a type of gambling.

The Commission found that most loot boxes could not be considered gambling under existing law. The problem was not the way the mechanism operated, but rather, the need for a qualifying prize under section 6(5). A prize needs to have ‘money’s worth’ or real-world value. The conclusion was that as there was usually no legitimate way to cash out loot box rewards, loot box prizes had no real-world worth and could not be considered gambling. They did explain further that if there were a legitimate way to cash out, those loot boxes would likely fall under the gambling label and the commission would take steps accordingly.

This meant that most loot boxes could continue to be sold to children.

The government has recently reviewed whether loot boxes should be classed as gambling. The call for evidence received 32,000 public responses, a rapid evidence assessment of available psychological studies and 50 direct submissions from sources such as academics and industry experts. The government’s response to this evidence can be accessed here.

Ultimately, the government’s views echoed those of the Commission — most loot boxes have no monetary value. The report decided not to amend the law to bring loot boxes under the umbrella of gambling, stating that it would be premature given the lack of definitive information about their potential harms. It was argued that doing so might risk unintended consequences — children beginning to use adult accounts was one example — or unfairly hinder the video game industry’s freedom. Similarly to the Gambling Commission, the report did recognise that where loot boxes can be cashed out legitimately they may be in breach of existing gambling law — but it trusts the Gambling Commission to take action when this occurs.


Digging deeper

It is true that the rapid evidence assessment found a lack of available research but InGAME, the report’s publisher, found that this gap meant that “a cautious approach to regulation of loot boxes is important.” However, the publisher went on to note that this “does not mean that nothing can or should be done”. The assessment actually advocates for enhanced protections and encourages ethical game design, where game developers prioritise safety within the design process. An example of this is age ratings in games or for loot boxes specifically.

Enhanced protections are particularly important when we consider loot boxes as a new product. As they can often be bought with either in-game or real-world money, many of the existing advertising restrictions on in-game purchases do not apply. So, if a player gets an unwanted reward, a game can display messages such as “you nearly had it!” when the outcome was purely chance-dependent, or “you’ll get it next time!” to promote a purchase when in reality there is no guarantee.

This do-nothing approach has been confirmed in the latest gambling reform policy paper. This means that a change to the law is not imminent. Whether this approach is correct remains to be seen. It is, however, concerning that the UK has chosen to allow the sale of loot boxes to children when so many other countries are taking steps to restrict their sale. They are not completely harmless, as InGAME highlighted, and there are some studies starting to emerge that link loot box expenditure to problematic gambling.

Many people, including the Children’s Commissioner, the House of Lords and the Digital, Culture, Media and Sport Committee, advocated for loot boxes not to be sold to children. In fact, the House of Lords committee went as far as calling for the use of Section 6(6) of the Gambling Act 2005 to bring loot boxes under gambling law until a permanent solution could be found. While it shows the sentiment, this solution may be legally flawed. Section 6(6) allows the Secretary of State to classify something as a game of chance. As mentioned above, the issue with loot boxes is that most are unable to satisfy the prize element of the Act, rather than the ‘game of chance’ element.

Until more research into the harms of loot boxes is conducted, we cannot know whether the government decision to leave loot boxes alone was correct. What is apparent is that there is huge potential for a disconnect between UK law and technological advancements, if the loot boxes issue is left unmonitored.

Georgia Keeton-Williams is an aspiring barrister and first-class law graduate from Northumbria University.

Navigating bias in generative AI


Nottingham PPE student Charlie Downey looks at the challenges around artificial intelligence

While the world lauds the latest developments in artificial intelligence (AI) and students celebrate never having to write an essay again without the aid of ChatGPT, beneath the surface, real concerns are developing around the use of generative AI. One of the biggest is the potential for bias. This specific concern was outlined by Nayeem Syed, senior legal director of technology at London Stock Exchange Group (LSEG), who succinctly warned, “unless consciously addressed, AI will mirror unconscious bias”.

 In terms of formal legislation, AI regulation differs greatly around the world. While the UK has adopted a ‘pro-innovation approach’, there still remain concerns around bias and misinformation.

Elsewhere, the recently approved European Union Artificial Intelligence Act (EU AI Act) will be seen as the first comprehensive regulation of artificial intelligence. It is expected to set the standard for legislation around the world, similar to what occurred with the EU’s General Data Protection Regulation (GDPR). The AI Act incorporates principles that will help reduce bias, such as training data governance, human oversight and transparency.

In order to really understand the potential for bias in AI, we need to consider the origin of this bias. After all, how can an AI language model exhibit the same biases as humans? The answer is simple. Generative AI language models, such as OpenAI’s prominent ChatGPT chatbot, are only as bias-free as the data they are trained on.

Why should we care?

Broadly speaking, the process for training AI models is straightforward. AI models learn from diverse text data collected from different sources. The text is split into smaller parts, and the model predicts what comes next based on what came before, learning from its own mistakes. While efforts are made to minimise bias, if the historical data that the AI is learning from contains biases — say, systemic inequalities present in the legal system — then the AI can inadvertently learn and reproduce these biases in its responses.
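
As a toy illustration of that point, the sketch below ‘trains’ a trivially simple next-word predictor on a deliberately skewed corpus and shows that it reproduces whatever association dominates its data. It is a crude stand-in for how large language models absorb statistical patterns, not a description of any real system:

```python
from collections import Counter, defaultdict

# A deliberately skewed toy corpus: the word after "was" is usually "guilty"
corpus = (
    "the defendant was guilty . the defendant was guilty . "
    "the defendant was guilty . the defendant was innocent ."
).split()

# "Training": count which word follows each word (a bigram model)
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Predict the most frequent continuation seen in training."""
    return next_word_counts[word].most_common(1)[0][0]

# The model simply reproduces the bias baked into its data
print(predict_next("was"))   # -> "guilty"
```

Scaled up by many orders of magnitude, the same dynamic explains why a model trained on historically biased legal text can reproduce that bias in fluent prose.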

In the legal profession, the ramifications of these biases are particularly significant. There are numerous general biases AI may display related to ethnicity, gender and stereotyping, learned from historical texts and data sources. But in a legal context, imagine the potential damage of an AI system that generated its responses in a manner which unfairly favours certain demographics, thereby reinforcing existing inequalities.

One response to this argument is that, largely, no one is advocating for the use of AI to build entire arguments and generate precedent, at least not with generative AI as it exists in its current form. In fact, this has been shown to be comically ineffective.

So how serious a threat does the potential for bias actually pose in more realistic, conservative uses of generative AI in the legal profession? Aside from general research and document review tasks, two of the most commonly proposed, and currently implemented, uses for AI in law firms are client response chatbots and predictive analytics.

In an article for Forbes, Raquel Gomes, Founder & CEO of Stafi – a virtual assistant services company – discusses the many benefits of implementing automated chatbots in the legal industry. These include freeing up lawyers’ time, reducing costs and providing 24/7 instant client service on straightforward concerns or queries.

Likewise, predictive analytics can help a solicitor build a negotiation or trial strategy. In the case of client service chatbots, the dangers resulting from biases in the training data are broadly limited to inadvertently providing clients with inaccurate or biased information. As far as predictive analysis is concerned, however, the potential ramifications are much wider and more complex.


An example

Let’s consider a fictional case of an intellectual property lawyer representing a small start-up, who wants to use predictive analysis to help in her patent infringement dispute.

Eager for an edge, she turns to the latest AI revelation, feeding it an abundance of past cases. However, unknown to her, the AI had an affinity for favouring tech giants over smaller innovators: its learning had been shaped by biased data that leaned heavily towards established corporations, skewing its perspective and producing distorted predictions.

As a result, the solicitor believed her case to be weaker than it actually was. Consequently, this misconception about her case’s strength led her to adopt a more cautious approach in negotiations and accept a worse settlement. She hesitated to present certain arguments, undermining her ability to leverage her case’s merits effectively. The AI’s biased predictions thus unwittingly hindered her ability to fully advocate for her client.

Obviously, this is a vastly oversimplified portrayal of the potential dangers of AI bias in predictive analysis. However, it can be seen that even a more subtle bias could have severe consequences, especially in the context of criminal trials where the learning data could be skewed by historical demographic bias in the justice system.
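
Still, the underlying mechanism can be sketched in a few lines. Below, a naive predictor is ‘trained’ on a made-up set of past patent disputes in which large firms happened to win more often; asked about the start-up’s prospects, it simply echoes that historical skew. Every case and number here is hypothetical:

```python
from collections import Counter

# Hypothetical historical cases: (party size, outcome) — skewed towards large firms winning
past_cases = [
    ("large firm", "won"), ("large firm", "won"), ("large firm", "won"),
    ("large firm", "won"), ("small firm", "lost"), ("small firm", "lost"),
    ("small firm", "won"),
]

def predicted_win_rate(party_size: str) -> float:
    """Estimate win probability purely from historical frequency."""
    outcomes = Counter(o for p, o in past_cases if p == party_size)
    return outcomes["won"] / sum(outcomes.values())

# The start-up's prospects look weak only because the training data was skewed
print(f"Small-firm predicted win rate: {predicted_win_rate('small firm'):.0%}")  # 33%
print(f"Large-firm predicted win rate: {predicted_win_rate('large firm'):.0%}")  # 100%
```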

The path forward

 It’s clear that AI is here to stay. So how do we mitigate these bias problems and improve its use? The first, and most obvious, answer is to improve the training data. This can help reduce one of the most common pitfalls of AI: overgeneralisation.

If an AI system is exposed to a skewed subset of legal cases during training, it might generalize conclusions that are not universally applicable, as was the case in the patent infringement example above. Two of the most commonly proposed strategies to reduce the impact of bias in AI responses are: increasing human oversight and improving the diversity of training data.

Increasing human oversight would allow lawyers to identify and rectify the bias before it could have an impact. However, easily the most championed benefit of AI is that it saves time. If countering bias effectively necessitates substantial human oversight, it reduces this benefit significantly.

The second most straightforward solution to AI bias is to improve the training data to ensure a comprehensive and unbiased dataset. This would, in the case of our patent dispute example, prevent the AI from giving skewed responses that leaned towards established corporations. However, acquiring a comprehensive and unbiased dataset is easier said than done, primarily due to issues related to incomplete data availability and inconsistencies in data quality.

Overall, while a combination of both these strategies would go a long way in mitigating bias, it remains one of the biggest challenges surrounding generative AI. It’s clear that incoming AI regulation will only increase and expand in an attempt to deal with a range of issues around the use of this rapidly rising technology. As the legal world increases its use of (and reliance on) generative AI, more questions and concerns will undoubtedly continue to appear over its risks and how to navigate them.

Charlie Downey is an aspiring solicitor. He is currently a third-year philosophy, politics and economics student at the University of Nottingham.

Improving access to justice – is AI the answer?


Jake Fletcher-Stega, a recent University of Liverpool law grad, explores the potential for technology to enhance legal services

Utilising advancements like artificial intelligence (AI) and chatbots in the UK can greatly boost efficiency and accessibility in the legal system. Legal tech has the potential to substantially elevate the quality of legal services, prioritising client outcomes over traditional methods, which is crucial for advancing the legal field.

Inspired by Richard Susskind’s work (a leading legal tech advisor, author and academic), this article seeks to demonstrate AI’s potential to spearhead advancements in the legal field and provide solutions to the issue of court backlogs currently plaguing the UK system.

 The problem: the overloaded UK court system

Despite our faith in the right to access to justice as a cornerstone of the British legal framework, the reality is that this is far less certain than might appear. Briefly put, access to justice is the ability of individuals to assert and safeguard their legal rights and responsibilities. In 2012, the Legal Aid, Sentencing and Punishment of Offenders Act (LASPO) significantly reduced funding for the UK justice system, resulting in a current backlog of approximately 60,000 cases and leaving many unable to afford representation.

If we are to fix this ongoing crisis, a fresh, unique, and revolutionary solution is required. I suggest that adopting an innovative approach, such as the use of legal technology, could significantly improve access to justice.

 The solution: legal tech

To echo the view of leading academic Susskind, the delivery of legal services is outdated and overly resistant to technological advancements. He asserts that the utilisation of artificial intelligence, automation, and big data has the potential to revolutionise the methods through which legal services can be provided and executed. I must reiterate: it isn’t beneficial that our legal sector is overly conservative and technophobic. Other professions have moved forward with technology, but lawyers haven’t.

Lawyers are behind the curve when compared to other sectors, such as finance and medicine, which are now utilising technology such as Microsoft InnerEye. Law isn’t significantly different from medical and financial advice — not different enough to deny the value of innovating our legal services.

The belief that the legal field cannot innovate in the same way as other industries due to its epistemological nature is a common misconception. Many argue that AI will never fully replicate human reasoning, analysis, and problem-solving abilities, leading to the assumption that it cannot pose a threat to human professionals whose job primarily involves ‘reasoning’. However, this perspective is flawed.

While AI may not operate identically to humans, its capability to perform similar tasks and achieve comparable outcomes cannot be underestimated. Instead of fixating on the differences in the way tasks are accomplished, we should shift our focus to the end result.

Embracing AI/Legal Tech and its potential to augment legal services can lead to more efficient, accessible, and effective outcomes for clients, without entirely replacing the valuable expertise and experience that human professionals bring to the table. It is by combining the strengths of AI with human expertise that we can truly revolutionise the legal sector and improve access to justice for all.


Outcome thinking

As lawyers, we must begin to approach the concept of reform in law through the notion of ‘outcome thinking’. In outcome thinking, the emphasis is on understanding what clients truly want to achieve and finding the most effective and efficient ways to deliver those outcomes. The key idea is that clients are primarily interested in the results, solutions, or experiences that a service or product can provide, rather than the specific individuals or processes involved in delivering it.

For example, instead of assuming that patients want doctors, outcome thinking suggests that patients want good health. Another example is the creation of this article. I used AI tools to help me adjust the language, structure and grammar of this text to make it a smoother read. This is because ultimately as the reader you are only interested in the result and not how I crafted this text.

Lawyers are getting side-tracked

Lawyers fail to grasp the focus of this discussion. To illustrate this, let me share a personal story. Just moments before my scholarship interview at one of the Inns of Court, I was presented with two statements and asked to argue for or against one of them within a five-minute timeframe. One of the statements posed was: ‘Is AI a threat to the profession of barristers?’ Instead of taking sides, I chose to argue that this question was fundamentally flawed.

My contention was that the more critical consideration should be whether new technology can enhance efficiency in the legal system, leading to more affordable and accessible access to justice. The primary focus of the law should be to provide effective legal services rather than solely securing an income for barristers, just as the priority in medicine is the well-being of patients, not the financial gains of doctors.

When a new medical procedure is introduced, the main concern revolves around its impact on patients, not how it affects the workload of doctors. Similarly, the legal profession should prioritise the interests of those seeking justice above all else.

One example — chatbots

One practical example of legal tech that Susskind suggests is the implementation of a ‘diagnostic system’. This system uses an interactive process to extract and analyse the specific details of a case and provide solutions. This form of frontline service technology is often provided through the medium of a chatbot. As a chatbot can work independently and doesn’t require an operator, it has the potential to streamline legal processes.

To test this, I developed a prototype application that demonstrated the potential of AI to tackle legal reasoning. Using the IBM Watson Assistant platform and academic theory from Susskind & Margaret Hagan, I created a chatbot that assisted a paralegal in categorising a client’s case. Although far from perfect, the project proved that AI can substantially improve the efficiency and quality of our outdated legal services.
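By way of illustration, the sketch below shows the basic triage idea: match a client's free-text enquiry against keyword lists and hand anything ambiguous back to a human. It is written in Python rather than on the Watson platform, and the categories and keywords are invented, so it should be read as a minimal sketch of the approach rather than the author's actual prototype.

```python
# A minimal, invented sketch of rule-based case triage. Categories and
# keyword lists are illustrative assumptions only.
CATEGORIES = {
    "employment": ["dismissal", "redundancy", "contract of employment", "discrimination at work"],
    "housing": ["eviction", "landlord", "tenancy", "deposit"],
    "consumer": ["refund", "faulty goods", "warranty", "misleading advert"],
}

def triage(enquiry: str) -> str:
    """Return the best-matching case category for a client's free-text enquiry."""
    text = enquiry.lower()
    scores = {
        category: sum(keyword in text for keyword in keywords)
        for category, keywords in CATEGORIES.items()
    }
    best, hits = max(scores.items(), key=lambda item: item[1])
    # If nothing matches, escalate to a human rather than guess.
    return best if hits > 0 else "refer to paralegal"

if __name__ == "__main__":
    print(triage("My landlord is threatening eviction and kept my deposit"))  # housing
```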

Concluding thoughts

This article has attempted to demonstrate how embracing technological innovation can revolutionise the legal profession. By focusing on delivering efficient and client-centric outcomes, the legal sector can improve access to justice and create a more effective system. While challenges exist, proactive adoption of innovative solutions will shape a promising future for law, ensuring its continued role in upholding justice for all.

Jake Fletcher-Stega is an aspiring barrister. He recently graduated from the University of Liverpool and his research interests lie in legal tech and AI.

The blame game: who takes the heat when AI messes up? https://www.legalcheek.com/lc-journal-posts/the-blame-game-who-takes-the-heat-when-ai-messes-up/ https://www.legalcheek.com/lc-journal-posts/the-blame-game-who-takes-the-heat-when-ai-messes-up/#comments Tue, 08 Aug 2023 07:55:57 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=190977 Megha Nautiyal, a final-year law student at the University of Delhi, explores the relationship between liability and technology

Megha Nautiyal, a final-year law student at the University of Delhi, explores the relationship between liability and technology

Imagine a scenario where an Artificial Intelligence-powered medical diagnosis tool misinterprets critical symptoms, harming a patient. Or consider an autonomous drone operated by an AI algorithm that unexpectedly causes damage to property. As the capabilities of AI systems expand, so too does the complexity of determining the legal responsibility when they err. Who should bear the responsibility for such errors? Should it be the developers who coded the algorithms, the users who deployed them, or the AI itself?

In the world of cutting-edge technology and artificial intelligence, we find ourselves at the cusp of a new era marked by revolutionary advancements and unprecedented possibilities. From self-driving cars that navigate busy streets with ease to sophisticated language models capable of composing human-like prose, the realm of AI is reshaping our lives in extraordinary ways. However, with the awe-inspiring capabilities of AI comes an equally daunting question that echoes through courtrooms, boardrooms, and coffee shops alike — who is legally responsible when AI makes mistakes?

Assigning liability: humans vs AI

Unlike human errors, AI errors can be complex and challenging to pinpoint. It’s not as simple as holding an individual accountable for a mistake. AI algorithms learn from vast amounts of data, making their decision-making processes somewhat mysterious. Yet, the concept of holding an AI legally responsible is not just science fiction. In some jurisdictions, legal frameworks are evolving to address this very conundrum.

One line of thought suggests that the responsibility should lie with the developers and programmers who created the AI systems. After all, they design the algorithms and set the initial parameters. However, this approach raises questions about whether it is fair to hold individuals accountable for AI decisions that may surpass their understanding or intent.

Another perspective argues that users deploying (semi-)autonomous AI systems should bear the responsibility. They determine the scope of AI deployment, its applications, and the data used for training. But should users be held liable for an AI system’s actions when they may not fully comprehend the intricacies of the algorithms themselves?

Is AI a legal entity?

An entity is said to have legal personhood when it is a subject of legal rights and obligations. The idea of granting legal personhood to AI, thereby making the AI entity itself liable for its actions, may sound like an episode of Black Mirror. However, some scholars and experts argue that as AI evolves, it may gain a level of autonomy and agency that warrants a legal status of its own. This approach sparks a thought-provoking discussion on what it means to recognise AI as an independent entity and the consequences that come with it.

Another question emerges from this discussion — is AI a punishable entity? Can we treat AI as if it were a living, breathing corporation facing consequences for its actions? Well, as we know, AI is not a sentient being with feelings and intentions. It’s not a robot that can be put on trial or sent to AI jail. Instead, AI is a powerful technology—a brainchild of human ingenuity—designed to carry out specific tasks with astounding efficiency.

In the context of law and order, AI operates on a different wavelength from corporations. While corporations, as “legal persons,” can be held accountable and face punishment for their actions, AI exists in a unique domain with its own considerations. When an AI system causes harm or gets involved in something nefarious, the responsibility is not thrust upon the AI itself. When an AI-powered product or service misbehaves, the spotlight turns to the human creators and operators—the masterminds who coded the algorithms, fed the data, and set the AI in motion. So, while AI itself may not be punished, the consequences can still be staggering. Legal, financial, and reputational repercussions can rain down upon the company or individual responsible for the AI’s misdeeds.


Global policies and regulations on AI

In the ever-evolving realm of AI, a crucial challenge that arises is ensuring that innovation goes hand in hand with accountability. Policymakers, legal experts, and technologists have to navigate uncharted territory, and face the challenge of crafting appropriate regulations and policies for AI liability.

In 2022, AI regulation efforts reached a global scale, with 127 countries passing AI-related laws. There’s more to this tale of international collaboration. A group of EU lawmakers, fuelled by the need for responsible AI and increasing concerns surrounding ChatGPT, called for a grand summit in early 2023. They summoned world leaders to unite and brainstorm ways to tame the wild stallion of advanced AI systems.

The AI regulation whirlwind is swirling with intensity. Stanford University’s 2023 AI Index proclaims that 37 AI-related bills were unleashed into the legal arena worldwide. The US charged ahead, waving its flag with nine laws, while Spain and the Philippines recently passed five and four laws, respectively.

In Europe, a significant stride was taken with the proposal of the EU AI Act nearly two years ago. This Act aims to classify AI tools based on risk levels, ensuring careful handling of each application. Moreover, the European Data Protection Board’s task force on ChatGPT signals growing attention to privacy concerns surrounding AI.

The road ahead: what should we expect?

 As we journey toward a future shaped by AI, the significance of policies regulating AI grows ever more profound. In this world, policymakers, legal experts, and technology innovators stand at the crossroads of innovation and ethics. The spotlight shines on the heart of the matter: determining the rightful custodian of AI’s mistakes. Is it the fault of the machines themselves, or should the burden fall upon their human creators?

In this unfolding saga, the road is paved with vital decisions that will shape the destiny of AI’s legal accountability. The future holds an alluring landscape of debates, where moral dilemmas and ethical considerations abound. Striking the right balance between human ingenuity and technological advancement will be the key to unlocking AI’s potential while safeguarding against unintended consequences.

Concluding thoughts

As we continue to embrace the marvels of AI, the captivating puzzle of legal accountability for AI errors looms large in this ever-evolving landscape. The boundaries between human and machine responsibility become intricately woven, presenting both complex challenges and fascinating opportunities.

In this dynamic realm of AI liability, one must tread carefully through the legal intricacies. The answers to who should be held accountable for AI errors must be reached on a case-by-case consideration. The interplay between human intent and AI’s decision-making capabilities creates a nuanced landscape where the lines of liability are blurred. In such a scenario, courts and policymakers must grapple with novel scenarios and evolving precedent as they seek to navigate this new challenge.

Megha Nautiyal is a final-year law student at the Faculty of Law, University of Delhi. Her interests lie in legal tech, constitutional law and dispute resolution mechanisms.

What does digital transformation mean for women in law? https://www.legalcheek.com/lc-journal-posts/what-does-digital-transformation-mean-for-women-in-law/ https://www.legalcheek.com/lc-journal-posts/what-does-digital-transformation-mean-for-women-in-law/#comments Thu, 12 Jan 2023 11:42:04 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=183206 MSc student and qualified Turkish lawyer Öznur Uğuz considers how advancements in tech help and hinder the current gender gap

MSc student and qualified Turkish lawyer Öznur Uğuz considers how advancements in tech help and hinder the current gender gap

Gender inequalities in women’s career advancement and the resulting gap in leadership positions in law firms are by no means new phenomena. While so much has changed since the days when women were denied the right to practise law on the grounds of their sex, the so-called glass ceiling persists in today’s legal industry, making it much harder for women to climb the career ladder.

That being said, the legal profession itself is in a period of profound change which, driven by technology and innovation, might change the picture for women in law, along with many other things in the legal industry. The potential changes the adoption of advanced technologies could bring about to legal practice have already been, and continue to be, discussed by many. Yet, nothing much has been said on how those changes might affect women in legal practice and the existing gender gap in the legal industry.

On the face of it, technology might help bridge the current gender gap by introducing new forms of working, removing time and place constraints, and fostering the emergence of brand new legal roles. One of the major advantages of the adoption of technology in the legal industry is flexibility, which provides legal professionals with the opportunity to carry out their work outside the workplace. Endorsement of technology in law firms might also facilitate the improvement of working hours by introducing positive changes to the billable hour system through the allocation of certain time-consuming and repetitive tasks to legal technology tools. These could make a significant difference in terms of work-life balance and job retention, particularly for women with parental responsibilities, making it easier to maintain a balance between work and family life.

Moreover, technology, more specifically algorithms, might help provide an impartial consideration process for women by mitigating discriminatory treatment they might face during recruitment, promotion and task allocation processes. However, algorithms, too, have the potential to perpetuate existing inequalities and exclusion in the legal industry. Contrary to common belief, algorithmic decision-making could also be impaired by biases as algorithms are developed and trained by humans and learn from existing data that might underrepresent or overrepresent a particular category of people. Given that employment decisions made by algorithmic systems are often based on companies’ top performers and past hiring decisions, the overrepresentation of men in male-dominated industries might very well lead to favouring male candidates over females.

A perfect example of this is the tech giant Amazon’s experimental hiring algorithm, which is said to have preferentially rejected women’s job applications on the grounds of the company’s past hiring practices and applicant data. The company’s machine learning system assessed applicants based on the patterns in previous applications and, since the tech industry has been male-dominated, the system penalised resumes containing the word “women’s”. While Amazon said that the system was never used in evaluating candidates, the incident suggests that this type of system exists and might already be used by some firms in employment decisions.
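The underlying mechanism is simple enough to show in a few lines. The sketch below uses invented data and a deliberately naive scoring rule; it is not a description of Amazon's system, only an illustration of how a tool trained on skewed historical outcomes reproduces that skew.

```python
# A minimal sketch, using invented numbers, of how a scoring tool naively
# trained on past hiring outcomes reproduces the skew in those outcomes.
past_decisions = [
    # (attended_womens_society, hired) -- both values are illustrative
    (0, True), (0, True), (0, True), (0, False),
    (1, False), (1, False), (1, True), (1, False),
]

def historical_hire_rate(flag: int) -> float:
    """Share of past candidates with this feature value who were hired."""
    outcomes = [hired for f, hired in past_decisions if f == flag]
    return sum(outcomes) / len(outcomes)

# A naive model scores new candidates by the past hire rate of "people like
# them", so the historical imbalance flows straight into future recommendations.
print(historical_hire_rate(0))  # 0.75 -> favoured
print(historical_hire_rate(1))  # 0.25 -> penalised for a proxy attribute
```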


The most worrying part of algorithmic discrimination, which could have an aggravating effect on the gender gap, is the scale of possible impact. While the effect of discriminatory decisions by humans is often limited to one or a few persons, a discriminatory algorithmic decision could affect a whole category of people and lead to cumulative disadvantages for those who fall into that category, in this case, women. The fact that algorithmic systems are often kept as trade secrets and protected by intellectual property rights or firms’ terms and conditions complicates things further, making it harder to detect and mitigate such discriminatory treatment.

Technology might also have more indirect implications for women in legal practice, mainly through a shift to automation and the take-over of certain legal tasks by technology tools. The ongoing trend toward automation is expected to cause unprecedented changes in the legal industry to the extent that some legal roles might entirely disappear, while new ones are emerging.

In a 2016 report, Deloitte predicted that 114,000 jobs in the legal sector were at risk of being lost to technology within the next two decades. Junior lawyers were expected to be those most affected by this trend due to the relatively less intellectually demanding nature of their roles. This bleak forecast has been supported by a study from the Law Society of England and Wales, which predicted a fall in employment of 13,000 legal professionals by 2027, with legal secretaries and office support roles at a higher risk of replacement by technology. Whilst neither analysis discussed the issue from a gender-specific perspective, occupational division in the legal industry indicates that women are likely to be more affected by the accelerating adoption of technology in law firms.

According to the Law Society Gazette, women accounted for 61% of solicitors and 52% of lawyers in UK law firms in 2021, while only 35% of law firm partners were female. Further data on occupational gender structure from LinkedIn Talent Insights showed that women constituted 87% of paralegals and 54% of staff attorneys in US corporate and commercial practice in 2019, as opposed to 23% of law firm partners. These figures indicate a significant gender gap in the career hierarchy, with leadership positions being male-dominated, whereas women mostly occupy entry-level positions, which are more likely to be affected by automation. While it is only the tip of the iceberg, the data signals that any replacement of legal roles by technology would likely broaden the already wide gender gap in the legal industry.

Öznur Uğuz is a qualified Turkish lawyer, who is currently studying for an MSc in European economy and business law in Rome, Italy. She previously completed the Graduate Diploma in Law at The University of Law, and is interested in the intersection of law, business and technology.

The impact of AI on copyright law https://www.legalcheek.com/lc-journal-posts/the-impact-of-ai-on-copyright-law/ https://www.legalcheek.com/lc-journal-posts/the-impact-of-ai-on-copyright-law/#respond Tue, 20 Dec 2022 08:52:44 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=182394 Following public excitement around 'ChatGPT', aspiring barrister Jonathan Binns considers the impact of artificial intelligence on UK copyright law, and even asks the chatbot for its take

Following public excitement around ‘ChatGPT’, aspiring barrister Jonathan Binns considers the impact of artificial intelligence on UK copyright law, and even asks the chatbot for its take

At the moment it seems like the whole world is shocked to discover the colossal leap artificial intelligence (‘AI’) has made since last month’s release of ‘ChatGPT’ by OpenAI, the company co-founded by Elon Musk.

I asked the newly-released software what effect it thinks AI will have on UK copyright law, and this is what it had to say:

“It is difficult to predict the exact impact that AI will have on copyright law in the UK. However, it is likely that the use of AI in creative industries, such as music and art, will raise questions about the ownership of works created by AI. There may also be discussions about whether AI itself can hold copyrights for works it creates. It is possible that the UK government will need to update copyright laws to address these issues.”

I must stress there was no human involvement in the creation of that answer and it has not been edited. When computers are capable of creating coherent and reasonable answers to complex questions, it’s only a short step to their ability to author literary works of their own. This specific computer programme specialises in its understanding of language and has the potential to change the face of many industries. It follows OpenAI’s previous AI image generator, ‘DALL-E 2’, which was capable of instantly generating artwork including photo-realistic images based on user prompts.

Copyright laws allow the creator of a work to be the sole owner of that work, and therefore to hold the sole rights to sell or reproduce their idea. These rights can be claimed by the author of the work under section 9 of the Copyright, Designs and Patents Act 1988 (‘CDPA’), which describes an author as the person who “created” the work. This work could be: literary work (e.g. a book or script), entrepreneurial work (e.g. a film or sound recording), or other works which are detailed in the Act. Where a literary work is computer-generated, the Act provides that “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” This is a confusing assortment of words that essentially means the author of a work written by an AI will be the writer of the prompt that encouraged the AI to write it.


Different categories of copyright works have different requirements for protection. For example, entrepreneurial works have no requirement for originality, in contrast to literary works, which section 1 of the CDPA requires to be “original”. The meaning of original is undefined in the Act but is understood to mean the original work of the author. This conflicts with the provisions under section 9, which allow the author to take credit for a computer-generated work in spite of it not being their own work.

Some suggest it would be a logical solution for a computer-generated work to be held separately to a human-written piece as an entrepreneurial work as opposed to a literary one. This would be similar to how the law treats sound recordings and musical copyright which are substantially the same but with a difference in authorship requirements and, consequently, a difference in the level of protection afforded to them.

Others question whether AI-created works should be entitled to copyright protection at all. This school of thought ultimately boils down to the fundamental purpose of intellectual property law. When a human protects their work, it is because they want to be the sole beneficiary of the products of their own time, effort and imagination. A computer-generated text, song or artwork does not derive from the same process, so why should it be afforded the same protection?

On the other side of the coin, the implications of AI are not limited to computer-generated literature flooding the shelves of bookshops and AI art hanging on the walls of the Louvre. Machine learning algorithms are already being implemented by companies such as YouTube to automate the process of copyright enforcement. These algorithms can quickly and accurately scan vast amounts of content, comparing it against known copyrighted works to identify potential infringements. This has made it easier for copyright holders to enforce their rights and protect their works from unauthorised use, but has also raised concerns about the potential for false positives and other errors in the process.
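For a rough sense of how automated matching works, the toy sketch below hashes overlapping chunks of a registered work and flags any upload containing identical chunks. Real systems such as YouTube's Content ID rely on far more sophisticated perceptual fingerprints of audio and video, so this should be read as a simplified stand-in under stated assumptions, not a description of any actual enforcement tool.

```python
# A minimal sketch of exact-match content fingerprinting. Real enforcement
# systems use perceptual fingerprints; this toy only catches verbatim chunks.
import hashlib

def fingerprints(text: str, chunk: int = 40) -> set[str]:
    """Hash overlapping chunks of a work so excerpts can be detected later."""
    return {
        hashlib.sha256(text[i:i + chunk].encode()).hexdigest()
        for i in range(0, max(len(text) - chunk + 1, 1))
    }

registered_work = "It was the best of times, it was the worst of times..." * 3
index = fingerprints(registered_work)

upload = "opening line: It was the best of times, it was the worst of times..."
matches = fingerprints(upload) & index
print(f"{len(matches)} matching chunks -> potential infringement flag")
```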

Overall, the impact of AI on copyright is clearly complex and multi-faceted. While the technology has brought about many positive changes, including making it easier to identify and enforce copyright infringement, it has also raised a number of challenging legal and ethical issues. As AI continues to advance and become more widely adopted, it is obvious that these issues will continue to evolve.

The UK is in the minority when it comes to recognising the early potential for the composition of copyright works without the need for a human author and legislating on it. Many other jurisdictions, such as the USA, will face issues with this growing technology now that the public has free access to this tool. In the USA, case law has established that for copyright to subsist, the work must be created by a human author using a modicum of creativity. It’s hard to say which approach will stand the test of time, but it is obvious that the foundations have been laid for a new normal for creative industries.

Jonathan Binns is an aspiring barrister and recent law graduate, currently undertaking the BPC at The University of Law, Leeds.

The future is driverless https://www.legalcheek.com/lc-journal-posts/the-future-is-driverless/ https://www.legalcheek.com/lc-journal-posts/the-future-is-driverless/#respond Thu, 04 Aug 2022 08:28:44 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=178107 Our driving laws are not geared up for the possibilities of driverless vehicles, but could the Law Commission have found a way to steer through the obstacles?

Our driving laws are not geared up for the possibilities of driverless vehicles, but could the Law Commission have found a way to steer through the obstacles? MSc student and qualified Turkish lawyer, Öznur Uğuz looks at proposals for reform

The Law Commission of England and Wales, which advises the government on law reform, published an issues paper on the law surrounding remote driving, Remote driving, in June 2022. It examines the existing law regarding remote driving and possible reforms, sets out and briefly analyses legal issues arising from the current law, and poses questions in order to invite responses from the public.

Remote driving — a technology that enables a person to drive a vehicle from a remote location rather than from within the vehicle — is a hot topic currently attracting a lot of interest from industry and business. It is already being used in the UK in off-road environments, particularly in farming, and is increasingly being trialled for on-road use. Its attractions include the ability to operate in hazardous environments, such as a mine, quarry or warehouse, and provide more feasible delivery. There are many challenges, however, particularly in terms of safety, liability and compliance.

How safe is it?

From a legal aspect, arguably the most important safety risks posed by remote driving are related to cybersecurity and terrorism. Some serious concerns on the matter include possible takeovers of remotely operated vehicles by hackers as well as their use as “a terrorist weapon”. Other risks referred to in the Law Commission paper relate to connectivity, situational awareness and maintenance of the driver’s focus.

At present, there is no specific legal and regulatory framework for remote driving, which requires existing laws and regulations to apply to remote driving systems and their operation. However, the adequacy of existing rules when applied to emerging technologies is highly questionable as these rules were formulated when the current level of technology and its possible implications could not be imagined.

The current law

The Commission has identified some construction and use regulations which may cause problems in the remote driving context. Those potentially problematic provisions are contained in the Road Vehicles (Construction and Use) Regulations 1986 and their breach is an offence under the Road Traffic Act 1988. The provisions are below as outlined by the Commission.

Regulation 104

Regulation 104 requires a driver to have proper control of the vehicle and a full view of the road and traffic ahead. The paper points out that Regulation 104 does not necessarily require the driver to be in the vehicle. This suggests that as long as the driver has a full view of the road and traffic, including through the use of connectivity, the full view requirement can be met. As emphasised by the Commission, the more difficult issue with the provision is what amounts to proper control: it is unclear whether proper control refers to the type of control a conventional driver would have, or whether the requirement can also be satisfied by a person undertaking only part of the driving task.

Regulation 107

Regulation 107 prohibits leaving a vehicle unattended unless the engine is stopped and the parking brake set. The Commission seems to have found the regulation mostly compatible with remote driving in light of the case law, according to which the driver does not need to be in the vehicle as long as they are in a position to observe it. This suggests that a vehicle may still be “attended” by a person who is near the vehicle or in a remote-control centre. However, the provision may still be breached when a remote driver cannot see the vehicle or is not in a position to prevent interference with it.

Regulation 109

Regulation 109 prohibits a driver from being in a position to see a screen displaying non-driving related information. The Commission has found that information about the driving environment and the vehicle, such as route and vehicle speed, displayed on a screen to a beyond line-of-sight driver is driving-related and thus permitted under the regulation. Still, it has noted that the information developers might wish to display may go well beyond what is permitted.


Regulation 110

Regulation 110 forbids the use of hand-held devices whilst driving, including mobile phones. In the paper, Regulation 110 has been found “potentially problematic” for “line-of-sight” driving, where a person walks alongside a vehicle and controls its speed or direction through a hand-held device. Such a person would technically be a driver using a hand-held device whilst driving, which would breach the regulation even though its original aim was to prevent the distracting use of mobile phones while driving.

Problems with the current law

The Commission is concerned that these uncertainties in the current law might hinder the development of some potentially valuable projects, or could be exploited to put on the road systems which are not ready in terms of quality and safety. Accountability for poor remote driving is also an issue, as the main responsibility currently lies with the driver, which may result in injustice, particularly where the driver has little control over key aspects of the operation. As things stand, a remote driver could face criminal prosecution for a serious offence in the event of an accident, even if they were not at fault.

Under the existing law, a remote driver also bears responsibility for the roadworthiness of the vehicle and would be liable even if they were not in a position to know that the vehicle was unroadworthy. While the driver’s employer could also be prosecuted in that case, the offence the employer could face would be relatively minor.

Civil liability

Civil liability may also become an issue in the remote driving context. As a possible example, it might be difficult to determine who is liable for damage in the event of an accident if the problem lies in connectivity or some other latent defect rather than with the driver. It is already difficult to determine how harm was caused and by whom; it may be more complex still when it comes to new technologies involving multiple actors and internal components.

In terms of insurance, the Commission refers to the UK’s compulsory motor insurance scheme. As originally required by the Road Traffic Act 1930, third-party motor insurance is compulsory for anyone who uses a motor vehicle on the road. While for a conventional vehicle identifying the person who is using the vehicle is quite easy, the situation is complex in the remote driving context. The paper refers to the case law and states that in that case, both the driver and the organisation that employs them would be “using” the vehicle, meaning they would both need to be insured against potential liability.

In the event of damage, the organisations that employ the remote driver would be liable both for their own wrongdoing in operating an unsafe system and for the faults of the drivers they employ. In addition, employers would be responsible for any defect in the vehicle or in the remote driving system. Still, a situation might be much more complicated than that, especially where a remote driving feature of a vehicle has been designed by one organisation but operated by another, or where a remote driver is a subcontractor rather than an employee.

Remote driving from abroad

One of the most interesting issues in terms of remote driving is the legal implications that might stem from the operation of a remotely driven vehicle on UK roads from abroad. There is currently no international standard on the regulation of remote driving, which might lead to serious challenges in the event of damage, particularly in terms of compliance with driving regulations and law enforcement. In addition to possible delays and high expenses, tracking down evidence of an accident which has occurred abroad but involved a vehicle operated remotely from the UK would be practically difficult.

Another important concern in the international context is whether a remote driver from another jurisdiction who is driving a vehicle on UK roads would need a UK driving licence. As explained in the paper, as a contracting party to the Vienna Convention on Road Traffic 1968, the UK is normally obliged to recognise a driving licence issued by another country that is a contracting party to the Convention until the driver becomes normally resident in the UK. Still, as remote driving involves higher safety risks than conventional driving, this existing provision might not be directly applicable in the remote driving context and an amendment or reform might be needed.

Conclusion

Concerns and possible implications of remote driving technology are much wider than these and require a lot of time and thinking from experts to be managed. However, the paper makes it clear that the current law needs at least amendments to provide clear rules to follow when dealing with remote driving technology so that efficient solutions could be produced to any problems the technology might pose. Regarding any future legal reform, the Law Commission is expected to set out possible options and publish advice to the UK government early in 2023, after receiving responses to its paper.

Öznur Uğuz is a qualified Turkish lawyer, who is currently studying for an MSc in European economy and business law in Rome, Italy. She previously completed the Graduate Diploma in Law at The University of Law, and is interested in the intersection of law, business and technology.

Welcome to the futuristic world of the Decentralised Autonomous Organisation https://www.legalcheek.com/lc-journal-posts/welcome-to-the-futuristic-world-of-the-decentralised-autonomous-organisation/ https://www.legalcheek.com/lc-journal-posts/welcome-to-the-futuristic-world-of-the-decentralised-autonomous-organisation/#respond Wed, 06 Jul 2022 08:20:06 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=177218 Can old laws govern these radical creations? MSc student and qualified Turkish lawyer, Öznur Uğuz investigates the mysterious entities known as DAOs and finds they have a lot to offer

Can old laws govern these radical creations? MSc student and qualified Turkish lawyer, Öznur Uğuz investigates the mysterious entities known as DAOs and finds they have a lot to offer

A Decentralised Autonomous Organisation (DAO) is a new form of digital blockchain-based organisation that is owned and governed by a community organised around a set of self-executing rules encoded on smart contracts. Members are issued with “tokens”, which grant voting rights on the governance of the organisation. Lacking a central authority, DAOs offer a more transparent and democratic decision-making process that extends their potential business applications from financial transactions and company management to secure voting, crowdfunding and charities.

Since DAOs rely on smart contracts, the risk of self-dealing and fraudulent behaviour by members is also very low compared to traditional forms of organisations. By digitally bringing together groups of people from different backgrounds and physical locations, DAOs promise to facilitate access to markets and set a new milestone in digitalisation in the age of globalisation. However, all these benefits come at a cost of legal uncertainty. The unique self-governing and decentralised structure of DAOs raises several questions ranging from the legal status and governance of the organisation to the extent of liability of its members, which cannot be answered using existing legal instruments.

Goodbye to the hierarchy

Unlike traditional corporate entities, DAOs do not have a hierarchical structure or a centralised authority but they rely on a democratic voting system and/or smart contracts operating on blockchain technology as their source of governance. DAOs can be categorised under two main types: “algorithmic DAOs” and “participatory DAOs”. Participatory DAOs are managed by the consensus of members through smart contracts. Each member in a DAO has the right and ability to participate in the DAO´s governance by voting on a proposal made by another member or by initiating a new one themselves. Those proposals that are supported by a prescribed portion of members are adopted and enforced by the rules coded in smart contracts. On the other hand, algorithmic DAOs aim to be entirely governed by smart contracts dictating the entire functionality of the organisation.
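The token-weighted voting at the heart of a participatory DAO can be sketched in a few lines. The example below is written in Python purely for illustration; on-chain versions would be written in a contract language such as Solidity, and the members, token balances and quorum rule here are invented assumptions.

```python
# A minimal, invented sketch of token-weighted DAO voting.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0

@dataclass
class DAO:
    token_balances: dict[str, int]            # member -> voting tokens held
    quorum: float = 0.5                        # share of all tokens needed to pass
    proposals: list[Proposal] = field(default_factory=list)

    def vote(self, proposal: Proposal, member: str, support: bool) -> None:
        # A member's vote carries the weight of the tokens they hold.
        weight = self.token_balances.get(member, 0)
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def passes(self, proposal: Proposal) -> bool:
        total = sum(self.token_balances.values())
        return proposal.votes_for > self.quorum * total

dao = DAO({"alice": 400, "bob": 350, "carol": 250})
p = Proposal("Fund a community grant")
dao.vote(p, "alice", True)
dao.vote(p, "carol", True)
print(dao.passes(p))  # True: 650 of 1,000 tokens voted in favour
```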

Who’s liable?

While fully autonomous DAOs would eliminate problems caused by human misconduct, they would give rise to additional legal issues. For DAOs, there is always a risk that the underlying software has defects such as bugs or security vulnerabilities, whose impact might be greater for algorithmically-managed DAOs where no human intervention is possible. Since smart contracts execute automatically, they are almost impossible to modify, and any change would require the entire contract to be cancelled and redrawn. This would particularly be an issue in the event of an organisational crisis, unlawful enforcement or a regulatory concern, hampering the organisation’s ability to react in time. What is more, determining the liable party in case of a dispute might be more difficult for fully autonomous DAOs that involve multiple actors and internal components of complex data and algorithms, which make it complicated to establish whether the damage was triggered by a single cause or resulted from an interplay of a number of different factors.


Another outstanding concern over DAOs is the personal and unlimited liability of their members for the organisation’s acts, debts and obligations, an issue resulting from the lack of an established legal framework. From a legal perspective, DAOs are deemed unincorporated entities with no corporate form or protection against liability, as DAOs do not follow the legal formalities of incorporation such as registration, bylaws and contracts. In the United States, DAOs are likely to be treated as “general partnerships”, which lack the ability to provide liability protection to their members. Since they do not have the usual protection enjoyed by members of limited liability companies, who risk only their capital contribution, a DAO’s members would in the event of a lawsuit be fully liable, including with their personal assets, until the claim is satisfied. Thus, if classified as a general partnership, a DAO may lose potential members who would otherwise support the DAO but worry that membership would put their assets at risk.

The exceptions

Currently, there are only a few exceptions to DAOs’ ambiguous legal status. The first step towards establishing a legal framework for DAOs in Anglo-American law was taken by the state of Vermont in 2018. Under Vermont’s Limited Liability Company Act, a DAO can register as a Blockchain Based LLC (BBLLC), and thereby gain an official legal status that allows it to enter into contractual agreements and offer liability protection to its members. Still, the legal recognition of DAOs as separate legal entities came later by the State of Wyoming, which has become the first state in the United States to legally recognise DAOs as a distinct form of limited liability company. Bill 38, which took effect on 1 July, 2021, has created a new form of legal entity called DAO LLC or LAO that provides LLC-like liability protections to DAOs that register as such. However, the law is essentially an amendment to the current Wyoming Limited Liability Company Act and does not create any new protections that are not available in existing LLC structures, while imposing new obligations on DAOs. Moreover, the law raises serious concerns primarily on whether registering as a Wyoming LLC would diminish the fundamental “decentralised” aspect of a DAO.

Under the act, a Wyoming DAO LLC can either be member-managed or algorithmically managed. Yet, an algorithmically-managed DAO can only be formed if the underlying smart contracts are capable of updates or modifications, meaning that DAOs formed under the Wyoming LLC Law will have to maintain some modicum of centralisation and human control. In addition, the law requires DAO LLCs to maintain a presence in the state of Wyoming through a registered agent, which may also temper the decentralised nature of DAOs. Having said that, the impact of the law is likely to be limited given the state’s small population and minimal ties to the financial industry. Overall, although Vermont’s BBLLC and Wyoming’s DAO LLC represent steps forward towards the development of a legal framework for DAOs, absent recognition at the federal level and significant clarity around the different forms of DAOs, these solutions would not be sufficient to overcome the current ambiguity.

Locating the DAO

From a jurisdictional point of view, a DAO’s decentralised form might also make it challenging to find the applicable law and jurisdiction in the event of a dispute. In the majority of legal systems, the applicable jurisdiction for entities is determined with reference to the place of incorporation of the organisation or the place where key managerial decisions of such organisations are taken. However, DAOs have neither a country of incorporation nor a place of administration. Unlike traditional software applications that reside on a specific server controlled by an operator assigned to a specific jurisdiction, DAOs run on a decentralised blockchain system and are collectively managed by a distributed network of members who can be from anywhere in the world.

In some cases, applicable law and jurisdiction might be determined based on the other contractual party or the creator of the DAO code. In terms of applicable law, it might also be possible to apply the law of the state or jurisdiction within which a lawsuit has been instituted. Still, as there is no established rule, litigants may have to bring actions in several jurisdictions to be able to obtain legal protection, and initiating a legal dispute against a DAO may become very impracticable and cumbersome in many senses.

These concerns aside, DAOs have the potential to elevate international business by creating a truly global and decentralised corporate structure that could effectively function without the need for hierarchical human management. If they reach their full potential and achieve mainstream adoption by overcoming the outstanding legal and regulatory challenges, they can accelerate the development of inclusive markets by easing access and can create novel business and cooperation opportunities that would not otherwise be possible.

Öznur Uğuz is a qualified Turkish lawyer, who is currently studying for an MSc in European economy and business law in Rome, Italy. She previously completed the Graduate Diploma in Law at The University of Law, and is interested in the intersection of law, business and technology.

Is the smart money on ‘smart contracts’? https://www.legalcheek.com/lc-journal-posts/is-the-smart-money-on-the-smart-contracts/ https://www.legalcheek.com/lc-journal-posts/is-the-smart-money-on-the-smart-contracts/#comments Fri, 18 Mar 2022 09:55:49 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=173378 Law student Tanzeel ur Rehman considers some of the drawbacks of self-executing agreements

Law student Tanzeel ur Rehman considers some of the drawbacks of self-executing agreements

Hegestratos, unlike the many trailblazing Ancient Greek philosophers, was a pioneering fraudster.

In one of the earliest recorded incidents of financial fraud (c.300 B.C.), this corn merchant had taken out a large insurance policy (known as bottomry) and attempted to swindle the insurers by sinking his ship along with the crew. Down on his luck, he was caught in the act and drowned while trying to escape the wrath of his intended victims. These bottomry bonds, an offshoot of the ancient Code of Hammurabi, were also the earliest known forms of insurance contracts, under which merchants could take out loans to finance a voyage by pledging their large vessels as collateral.

Today, the insurance world is a multi-trillion-dollar industry. The market in the UK is the largest in Europe and fourth largest in the world. Be it the minor ‘fender-benders’ or any other ‘named perils’, insurance policies have got you covered.

Discussions are picking up pace regarding the role of technology in this rapidly growing industry. The concept of ‘InsurTech’ is gaining momentum. Proponents believe that the use of ‘smart contracts’ and blockchain/distributed ledger technologies (DLTs) could revolutionise the insurance market. Last year, the European Insurance and Occupational Pensions Authority (EIOPA) published a discussion paper which highlights the potential benefits of these technologies.

However, there is a dearth of research regarding the possible downsides of smart contracts. Smart contracts are touted as one of the most groundbreaking innovations of our times. If that is the case, then surely smart contracts have the potential for undesirable consequences as well (see the ‘Pathetic dot theory’ in Professor Lawrence Lessig’s Code: And Other Laws of Cyberspace). The exaggerated optimism surrounding smart contracts overlooks the fact that they could pave the way for hitherto unknown ways of transacting unlawfully.

This is partly because there is emphasis on a more generalised, and less technical understanding of these technologies. As Andrés Guadamuz explains in his 2019 paper, “smart contracts are not contracts” and that “for all intents and purposes they should not be”. Smart contracts are in fact self-executing code that, unlike traditional contracts, may or may not be of a binding nature. An article published on this website points out that “self-executing agreements written in code are not a panacea to businesses’ and individuals’ contracting woes”.
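To see what ‘self-executing’ means in practice, consider the minimal sketch below: a parametric insurance rule that pays out automatically once a trigger condition is met. The figures and the rainfall trigger are invented, and a real deployment would live on-chain in a contract language and read its trigger from an oracle; the point is simply that the code, not a human adjuster, decides the outcome.

```python
# A minimal sketch, with invented conditions, of a parametric "self-executing"
# insurance rule. Real versions would be on-chain contracts fed by an oracle.
def settle_policy(recorded_rainfall_mm: float, threshold_mm: float = 100.0,
                  payout: float = 5_000.0) -> float:
    """Pay out automatically if the insured trigger condition is satisfied."""
    # No adjuster, no discretion: the code alone decides the outcome.
    return payout if recorded_rainfall_mm >= threshold_mm else 0.0

print(settle_policy(recorded_rainfall_mm=120.0))  # 5000.0 -> payout executes
print(settle_policy(recorded_rainfall_mm=40.0))   # 0.0    -> no payout
```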


Smart contracts have unique vulnerabilities such as performance issues, security threats and privacy concerns. The performance issues highlighted by tech experts include, inter alia, throughput bottlenecks, limited scalability and transaction latency. The security concerns relating to smart contracts are also well-founded. The Decentralised Autonomous Organisation (DAO) attack, which exploited a re-entrancy vulnerability to steal around two million Ether from a smart contract, serves as an eye-opener. The attack on SmartBillions, a decentralised and transparent lottery system, exhibits how blockchain ‘hashes’ could be manipulated. These attacks show that DLTs are vulnerable to re-entrancy and event-ordering manipulations.
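The re-entrancy flaw behind the DAO attack is easier to grasp with a toy model. The sketch below, written in Python rather than a contract language and with invented figures, shows the classic mistake: funds are released before the caller's balance is updated, so a malicious recipient can call withdraw() again mid-transfer and drain far more than its entitlement.

```python
# A minimal, invented sketch of a re-entrancy flaw: the external transfer
# happens before the balance update, so the recipient can re-enter withdraw().
class VulnerableVault:
    def __init__(self):
        self.balances = {"attacker": 10}
        self.pool = 100  # total funds held by the contract

    def withdraw(self, caller, amount):
        if self.balances[caller] >= amount and self.pool >= amount:
            self.pool -= amount
            caller_receives(self, caller, amount)   # external call FIRST...
            self.balances[caller] -= amount         # ...state update only AFTER

def caller_receives(vault, caller, amount):
    # The attacker's "receive" hook re-enters withdraw while the old balance
    # is still recorded, draining more than it is entitled to.
    if vault.pool >= amount:
        vault.withdraw(caller, amount)

vault = VulnerableVault()
vault.withdraw("attacker", 10)
print(vault.pool)  # 0, not 90: the same 10-token deposit was withdrawn repeatedly
```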

Another challenge that smart contracts face is compliance with data protection rules. For example, the European General Data Protection Regulation (GDPR) stipulates that citizens have a “right to be forgotten” which is inconsistent with the immutable nature of blockchain-enabled smart contracts. Research reveals that even where a smart contract becomes a legal contract, it may not be between the data subject and the controller. Furthermore, consent has limited value here, as under the EU data protection regime, the data subject must be able to revoke consent, which becomes impossible when data processing cannot be halted at the request of the data subject.

Moreover, Article 22(3) of GDPR requires that data controllers ensure measures including a right to human intervention. Uncertainties and controversies regarding the scope of this obligation, remain a relevant theme. Compliance with these regulations implies that the smart contracts’ “trustless” framework will regress to a third-party trusted network, losing its essence.

Another dimension of third-party interference in smart contracts is the use of ‘off-chain’ resources (or oracles). Smart contracts need to receive off-chain information, at pre-determined intervals, from resources which are not on the blockchain. The potential issues linked to this include the oracle failing to push out the necessary information, or providing incorrect data. Moreover, reliance on oracles could also allow blockchain nodes to be hacked or misused to report erroneous data that will be logged on the blockchain in an immutable manner.

Smart contracts also have the potential to open the floodgates to a new class of ‘collusive agreements’. DLTs are poised to challenge antitrust enforcement by enabling firms to implement illegal practices and circumvent rules more efficiently through smart contracts. Blockchain as a medium to facilitate anticompetitive practices will pose interesting questions pertinent to §1 of the Sherman Act (in the American context), section 2(1) of the Competition Act 1998 (in the British context) and Article 101(1) of the TFEU (in the European context). Smart contracts could assist companies in a “conscious commitment to a common scheme” (Monsanto v Spray-Rite Serv. Corp.). Smart contracts could also give effect to ‘concerted practices’ between companies through “coordination between undertakings which,…knowingly substitutes practical cooperation between them for the risks of competition” (Imperial Chemical Industries Ltd v Commission of EC).

In the insurance industry, a mutual lack of trust between actors is a huge challenge. One cannot blame them, taking into account the long-standing and shady history of enterprising crooks like Hegestratos. Whether the grandiose claims of ‘trustlessness’ and ‘transparency’, dubbed as the hallmarks of smart contracts, will provide the desirable solutions, remains to be seen. In 2020, investments into technology-enabled insurance solutions stood at a staggering €6 billion and the amount invested is expected to increase exponentially. When critically analysing the potential vulnerabilities and drawbacks of these technologies, a burning question could (or should) be, whether the smart money is on the ‘smart contracts’?

Tanzeel ur Rehman is a second year law student at the University of Sindh, Pakistan.

The rights and wrongs of life in the metaverse https://www.legalcheek.com/lc-journal-posts/the-rights-and-wrongs-of-life-in-the-metaverse/ https://www.legalcheek.com/lc-journal-posts/the-rights-and-wrongs-of-life-in-the-metaverse/#comments Mon, 28 Feb 2022 09:13:37 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=172987 Future trainee William Holmes takes stock of the various legal issues emerging in the virtual world

Future trainee William Holmes takes stock of the various legal issues emerging in the virtual world

In his 1929 presentation to the Royal Society of Arts, the artist Tom Purvis concluded: “we hear lots of talk about artists not being business men; but what I should feel grateful for, and I think commerce would benefit greatly by, would be more business men who were artists; that is to say, let us have more artistic understanding in commerce and there will be much more commerce in art”.

Almost a century later, the business world has become acutely conscious of art’s commercial use. The metaverse, a virtual 3D space in which individuals can interact with one another, is one of the most daring examples of this. But, I wonder what Purvis, who is famous for combining the artistic and the commercial in his eye-catching advertising posters, would have thought of the metaverse? Would he perhaps feel that business had been carried away by the artistic?

When doing business in the metaverse, it is worth asking what you really own. Andres Guadamuz’s excellent blog underlines the importance of considering the terms and conditions of a metaverse provider’s licensing agreement when figuring out what you own in a virtual world. He sketches out three broad scenarios.

The first is a platform model, where the company providing the service owns everything in the virtual world. The second is a more generous version of the platform model, where the service provider allows users to own what they purchase or create in the metaverse. This has been adopted by Second Life which grants its metaverse users rights in the front-end code and content they own or create, whilst retaining rights to the back-end code. And the third is a community model, where the users own everything in the metaverse and a user’s stake is determined by their possession of digital tokens. As Guadamuz points out, the first two categories dominate at the moment, whilst the third model is the end-goal for ‘Web3’ virtual worlds, like Decentraland and Axie Infinity. In short, the contents of the end user licence agreement lay out the basic limits of what can be owned in the metaverse.

This means picking the virtual world with the right licence agreement is essential. The prospect of owning digital property in virtual worlds (combined with the frenzy for NFTs) is driving the interest of the likes of JP Morgan. And the American banking titan wants to take it even further. It reports that “the virtual real estate market could start seeing services much like in the physical world, including credit, mortgages and rental agreements”. But how realistic are JP Morgan’s hopes of applying land law to digital property? Well, the ground seems a little shakier regarding the legal status of this ‘property’.

English courts are steadily developing a patchwork approach to ‘virtual property’. Recent case law (following the initial precedent set in AA v Persons Unknown) has seen cryptoassets recognised as property for the purposes of the Proceeds of Crime Act 2002 (DPP v Breidis and Reskajs) and being deemed capable of forming the subject matter of a trust (Wang v Darby). However, it is not a clean sweep with the digital currency Bitcoin being considered too volatile to be used as security for a defendant’s costs (Tulip Trading Ltd v Bitcoin Association for BSV and Others). This could have ramifications for the risks in relation to the issuance of legal mortgages that JP Morgan hopes to take over digital assets in a volatile virtual real estate market. As Gilead Cooper QC indicates, commercial interest may drive the need for a more coherent and comprehensive legislative structure in the future (as Cooper puts it, a “Law of Virtual Property Act” of sorts).

Intellectual property, however, provides a more developed approach to property in the metaverse. This month has seen the luxury fashion brand Jonathan Simkhai release its 2022 collection in Second Life, and Roblox recently ran its first annual Paris World Fashion Show, spearheaded by Paris Hilton. Decentraland has also announced a four-day fashion week that will take place in March. These events have also attracted advertisers such as Boohoo, which recently bought the rights to advertise on virtual billboards at the Paris World Fashion Show.


You can tell a lot about the commercial ambitions for the metaverse from trademark applications. Take RTFKT, the digital collectables company that sold over $3 million worth of virtual sneakers in less than five minutes and was acquired by Nike in December 2021. Its trademark applications include an array of virtual sports clothing, fashionwear and equipment.

But more interestingly, they also cater for the possibility of cross-selling physical goods, filing for protection over “custom manufacture and custom 3D printing” for many of these items. Furthermore, they seem to share JP Morgan’s mindset to some extent, demonstrated by applications for trademark rights which allow for the “leasing of digital content [… and] leasing of reproduction rights of digital content”. It is also clear from Nike’s redrafting of its trademark applications, which now explicitly state the desire for protection against “counterfeiting, tampering, and diversion”, that there will be pressure on metaverse providers to police and actively enforce these rights.

Another angle that is appealing to metaverse users is obtaining IP rights over their avatars. This is not unprecedented. In 2008, Alyssa LaRoche successfully trademarked her Second Life avatar ‘Aimee Weber’. You might also seek to copyright your avatar, given the success some have had in copyrighting fictional characters in certain jurisdictions. Notably, in the US the Batmobile, Mickey Mouse, Donald Duck and Sherlock Holmes (which has seen recent litigation over the Netflix film Enola Holmes) have all enjoyed copyright protection.

Based on this influential US jurisprudence, there are broadly two legal hurdles to be overcome. First, the character must be central to the story, more than one of the mere “vehicles for the story told”. Accordingly, in Warner Bros. v Columbia Broadcasting System the main character Sam Spade did not receive copyright protection, but in Universal City Studios v Kamar Industries, the character E.T. in Steven Spielberg’s film was copyrightable. Second, the character must be sufficiently distinctive. As the judge noted in Nichols v Universal Pictures, “the less developed the characters, the less they can be copyrighted”.

This is a whistle-stop tour of the commercial rights that are potentially available in the metaverse. But everything gets more complicated when we consider the wrongs of the metaverse. Reports of some users experiencing harassment, and of occurrences that would likely have incurred legal consequences had they taken place in the real world, have left many wondering how tort or criminal law might apply to the metaverse, or whether there may be some circumstances in which an avatar could be granted legal standing. Again, this is not a new question, either specific to avatars or more generally in relation to online harms. Such concerns were raised by the 2008 incident in which Ailin Graef, who made her fortune buying and developing land in Second Life, saw an interview involving her 3D avatar ‘Anshe Chung’ derailed by a hacker who peppered the virtual set with images of “delicately-animated flying penises” for 15 minutes.

One option might be to grant avatars some form of legal personality. Questions of distinguishing the user from the avatar have arguably already been explored by existing company law (see, for example, how the courts distinguished between Mr Lee and his company in Lee v Lee’s Air Farming). A system similar to company registration could be set up to allow users to register their avatars’ separate legal personality.

In some instances, the common law could fill the gap. For example, the courts might consider broadening their definition of ‘the danger zone’ for primary victims (White v Chief Constable of South Yorkshire Police) suffering psychiatric injury as a result of interactions in the metaverse. Or they could treat a metaverse user as a secondary victim by expanding the means by which events can be perceived beyond the limited scope of one’s unaided senses laid down in Alcock v Chief Constable of South Yorkshire Police. But if there is neither standing nor any acknowledgement of agency between user and avatar, a potentially dangerous gap in the law will undoubtedly remain.

Purvis believed art should be useful. “I claim that art today is being applied, and should be applied more definitely to human needs, and it is the type of art which helps to develop the nation’s prosperity”. It was profoundly interconnected with “life and progress”. Such an aesthetic perspective is undoubtedly a mantra that the entrepreneurial artists building the metaverse embody. But as we seek art’s utility, increasingly integrating it into our lives, we should also consider its rights and wrongs.

William Holmes is a future trainee solicitor at a magic circle law firm.

The post The rights and wrongs of life in the metaverse appeared first on Legal Cheek.

Put your trust in computational antitrust https://www.legalcheek.com/lc-journal-posts/put-your-trust-in-computational-antitrust/ https://www.legalcheek.com/lc-journal-posts/put-your-trust-in-computational-antitrust/#comments Thu, 03 Feb 2022 10:33:04 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=172040 Law student Tanzeel ur Rehman explains how AI is being used to revolutionise competition laws

Law student Tanzeel ur Rehman explains how AI is being used to revolutionise competition laws

Insidiously intercepting a trader en route to the market, buying up his goods and then inflating the price was once considered one of the most heinous crimes known to law. ‘Forestalling’, as it was called, also marked the beginnings of antitrust or competition law. In enforcing the ‘King’s peace’, the Crown punished the offence with a heavy fine, forfeiture and some humiliating time in the pillory.

A millennium later, anticompetitive practices are not as simple as those the Anglo-Saxons perpetrated. This is the age of Big Data and AI, paving the way for more subtle and evasive forms of monopolistic conduct. The complexity of a digital marketplace has given rise to technologically facilitated anticompetitive techniques. Antitrust enforcement is now faced with the dilemma of playing catch-up to rapidly fluctuating business behaviour.

Dynamic pricing by “algorithmic collusion” is one such example. In 2011, algorithmic pricing used by two booksellers on Amazon comically drove the price of a used book to nearly $24 million. In 2015, Uber was accused of monopolistic conduct for using its surge-pricing algorithms. Although the company was exonerated due to a lack of evidence of human collaboration, the District Judge highlighted that: “The advancement of technological means for the orchestration of large-scale price-fixing conspiracies need not leave antitrust law behind”. In the same year, an art dealer pleaded guilty to colluding with other dealers to fix the prices of artworks on Amazon with the help of dynamic-pricing algorithms. Perfect price discrimination, once considered an impossibility, is now a reality. It is evident that the laws and enforcement techniques created to control the monopolies of the industrial age are struggling to keep pace with the information age.
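
To see how quickly such a feedback loop escalates, consider the short simulation below. It is purely illustrative: the multipliers are invented stand-ins for the kind of repricing rules the booksellers’ bots are reported to have used, not their actual parameters.

```python
# Purely illustrative simulation of two repricing bots locked in a feedback loop.
# The multipliers are invented assumptions, not the real booksellers' settings.
price_a = 30.00   # seller A's starting price (USD)
price_b = 35.00   # seller B's starting price (USD)

A_MULTIPLIER = 1.27   # A always prices at 1.27 times B's last price
B_MULTIPLIER = 0.998  # B undercuts A slightly to appear the cheaper copy

for day in range(1, 61):
    price_a = round(price_b * A_MULTIPLIER, 2)
    price_b = round(price_a * B_MULTIPLIER, 2)
    if day % 10 == 0:
        print(f"Day {day}: A = ${price_a:,.2f}, B = ${price_b:,.2f}")

# Because 1.27 x 0.998 is greater than 1, both prices grow by roughly 27% per cycle:
# past $1 million in about six weeks and past $20 million in about two months,
# without any human ever agreeing to fix a price.
```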

The question begging to be asked, then, is whether computational law is the future of antitrust. Let’s analyse. Computational methods are already being used to detect, analyse, and remedy the increasingly dynamic and complex nature of modern antitrust practices. In 2017, mechanised legal analysis was employed by the European Commission to study 1.7 billion search queries in the Google Shopping case. In 2018, the Commission used similar tools to examine 2.7 million documents in the Bayer-Monsanto merger. This helps close the well-documented historical mismatch between “law time” and a market’s “real time”. It also presents a strong case that antitrust regulators can better analyse prevailing market practices by employing computational tools. For example, the American Federal Trade Commission (FTC) is using software called ‘Relativity’ to analyse companies’ internal communications for the purpose of identifying monopolistic conduct. A step further, application programming interfaces (APIs) can play a significant role in creating channels for the transfer of data between companies and regulators.


Merger control is another dimension of antitrust regulation where computational tools have useful applications. Analysis of vast datasets is the backbone of merger analysis. A persistent problem is that companies are in control of the data being sent to regulators. In both the DuPont and WhatsApp cases, the EU Commission highlighted that the parties had withheld or provided misleading data in the investigations. APIs could fix this by establishing systemised communication links between companies and regulators in real time. The use of machine learning (ML) and AI in auditing millions of documents enables “finding the needles in these haystacks”. In addition to this, blockchain could be used to create tamper-resistant databases, helping to ensure data integrity.
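
As a rough sketch of what such a channel could look like, the snippet below stands up a minimal, hypothetical ‘regulator’ endpoint using only Python’s standard library: companies POST filings as JSON and the regulator timestamps each one on receipt. The endpoint path, field names and port are invented for illustration; a production system would add authentication, schema validation and audit logging.

```python
# Minimal, hypothetical sketch of a regulator-facing reporting API.
# The endpoint path, fields and port are illustrative assumptions only.
import json
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

FILINGS = []  # in-memory store; a real system would persist and audit these records


class FilingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/filings":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        try:
            filing = json.loads(self.rfile.read(length))
        except json.JSONDecodeError:
            self.send_error(400, "body must be valid JSON")
            return
        # Timestamp on receipt, so the regulator rather than the company controls the record
        filing["received_at"] = datetime.now(timezone.utc).isoformat()
        FILINGS.append(filing)
        body = json.dumps({"status": "received", "filing_id": len(FILINGS)}).encode()
        self.send_response(201)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), FilingHandler).serve_forever()
```

A company-side client would then push its data to the endpoint on an agreed schedule, rather than handing over a one-off, self-curated disclosure.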

Judges and regulators are often faced with doctrinal questions pertinent to anticompetitive behaviour. Computational legal analysis (CLA) can be utilised to unravel existing patterns in judicial decisions, contracts, constitutions and existing legislation. A perfect example is the use of aggregated modelling to analyse linguistic patterns in Harvard’s Caselaw Access Project (CAP). Such modelling can help create topic clusters that revolve around specific antitrust doctrines, for example predatory pricing or shifting market power. This can be helpful to both courts and regulators adjudicating unique antitrust cases or investigations. By providing context, trends and connections between various doctrines, such modelling has practical utility in tackling complex or novel scenarios.
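
The snippet below gives a flavour of how such topic clusters can be produced. It is a toy illustration, not CAP’s actual pipeline: the case extracts are invented, and a real analysis would run over thousands of full judgments.

```python
# Toy illustration of clustering case extracts by doctrine with TF-IDF and k-means.
# The extracts are invented; Harvard's CAP pipeline is far more sophisticated.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

extracts = [
    "defendant engaged in predatory pricing, selling below cost to exclude a rival",
    "sustained predatory pricing below cost with intent to recoup losses later",
    "the acquisition would shift market power towards the dominant incumbent platform",
    "the merger entrenches the incumbent's dominant market power in the platform market",
]

vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(extracts)

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label, text in zip(model.labels_, extracts):
    print(label, text)

# With enough judgments, clusters like these start to track doctrines such as
# predatory pricing on the one hand and shifting market power on the other.
```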

Computational tools also have interesting implications for the current framework of merger reviews. More specifically, they can be of great use in predicting “killer acquisitions”. The current law requires the adjudicator to use a combination of precedent and guesswork when forecasting a killer acquisition. Modern ML and AI tools can be of greater assistance in reaching a more accurate prediction. The use of autoencoders to assess dynamic market environments is one promising approach. The stacking of multiple autoencoders (for example, embedding, translation, and detection) can help identify fact patterns, and iterative processes can be used to converge towards an optimal prediction as to whether intervention is warranted.
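
As a hedged sketch of the underlying idea, the snippet below trains a small neural network to reconstruct ‘ordinary’ deals and then scores new deals by how badly the reconstruction fails. It is a deliberate simplification: a single bottleneck network rather than a stack of autoencoders, and the deal features are invented for illustration.

```python
# Toy sketch of reconstruction-error screening for merger review.
# One small bottleneck network stands in for the stacked autoencoders described
# above, and the deal features are invented for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical columns: target R&D intensity, pipeline overlap with the acquirer,
# target revenue, deal premium.
ordinary_deals = rng.normal(loc=[0.1, 0.2, 1.0, 0.3], scale=0.05, size=(200, 4))

scaler = StandardScaler().fit(ordinary_deals)
X = scaler.transform(ordinary_deals)

# Train the network to reproduce its own input through a narrow hidden layer.
autoencoder = MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000, random_state=0)
autoencoder.fit(X, X)

def anomaly_score(deal):
    x = scaler.transform([deal])
    return float(np.mean((autoencoder.predict(x) - x) ** 2))

print(anomaly_score([0.1, 0.2, 1.0, 0.3]))  # resembles the deals seen in training: low
print(anomaly_score([0.9, 0.9, 0.1, 2.5]))  # a pipeline-killing pattern: far higher
```

Deals with unusually high reconstruction error would not be blocked automatically; they would simply be flagged for closer human review.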

Forestalling became an obsolete offence more than a century ago, but the monopolistic behaviour it embodied has now taken more creative and sophisticated forms. It would not be incorrect to say that the antitrust policies of today and the ‘King’s peace’ of bygone days share a common concern for consumer welfare. In this digital age, being forewarned is forearmed. So, unless you are willing to pay millions of dollars on Amazon for a used book, or twice the fare for an Uber ride, it’s about time you put your trust in computational antitrust.

Tanzeel ur Rehman is a second year law student at the University of Sindh, Pakistan.

The post Put your trust in computational antitrust appeared first on Legal Cheek.

The Binance backlash https://www.legalcheek.com/lc-journal-posts/the-binance-backlash/ https://www.legalcheek.com/lc-journal-posts/the-binance-backlash/#respond Wed, 20 Oct 2021 09:59:24 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=168625 Durham University law student Jamie Campbell looks at the potential dangers of crypto derivatives

Durham University law student Jamie Campbell looks at the potential dangers of crypto derivatives

Since the release of Bitcoin on 9 January 2009, the cryptocurrency market has steadily grown to be valued at over £1.5 trillion. While the median holding of investors in 2021 is relatively low at only £300, cryptocurrency investors, traders and enthusiasts have been able to speculate and increase their returns by utilising risky cryptocurrency derivatives. For those that don’t know, a derivative is a “contract between two or more parties whose value is based on an agreed-upon underlying financial asset”.

The first kinds of derivatives for cryptocurrency were rudimentary and conducted on a small scale. For example, early marketplaces allowed traders to employ arbitrage, buying digital currency on the spot market and then selling futures contracts where a premium was present. This allowed traders to effectively hedge against volatility and secure a price for their Bitcoin. Typically, this kind of service is provided by the traditional banking system. However, as cryptocurrency is supposedly ‘decentralised’, online exchanges and marketplaces were able to set up these systems themselves, allowing would-be investors to jump right in and trade complex financial instruments.
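
A simple worked example, with invented prices, shows how that trade locks in a return regardless of where the market goes.

```python
# Worked example of the 'cash and carry' trade described above, using invented prices.
spot_price    = 10_000   # buy 1 BTC now on the spot market (USD)
futures_price = 10_600   # simultaneously sell a futures contract expiring in 3 months (USD)

premium = futures_price - spot_price           # locked in on the day the trade is opened
annualised_return = (premium / spot_price) * (12 / 3)

print(f"Locked-in premium: ${premium}")               # $600, whatever Bitcoin does next
print(f"Annualised return: {annualised_return:.0%}")  # 24% before fees and funding costs
```

At expiry the futures position settles against the spot price, so the gain on one leg offsets the loss on the other and only the premium remains.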

In 2016, the derivatives platform BitMEX invented the Bitcoin perpetual swap/future, a new kind of instrument without the expiry date and the price deviations from the underlying asset associated with futures contracts. In 2017, CME Group launched the first Bitcoin futures contract, and prime brokers such as CMC Markets Connect, Advanced Markets and B2Broker have since launched crypto CFD (contract for difference) offerings. Much has changed in the crypto derivatives space since 2017. In short, during the bull run of 2017 and the ensuing bear market, trading in crypto derivatives increased significantly, as profits could be made even when prices tumbled. That trend has continued, with derivatives trading exceeding spot trading when prices fall.

An introduction to Binance

Binance, one of the leading exchanges in the crypto realm, has recently come under fire from regulators around the globe for a plethora of issues. Primarily, regulators and commentators have cited Binance’s promotion of perpetual crypto futures to retail investors who are unqualified and ignorant of the risks involved. For context, Binance’s influence in both the spot market (where assets are bought and sold for immediate delivery) and the derivatives markets remains unparalleled, even after the regulatory barrage it has faced. In terms of trade volume and open interest, it consistently places as the number one exchange globally. This is attributed to its high trust score and the sheer number of coins and trading pairs it makes available to consumers. In the Financial Conduct Authority (FCA)’s Cryptoasset Consumer Research paper 2020, Binance was found to be the second most used exchange by UK consumers. I have personally used Binance and will likely keep doing so, but as I will explain, there is a ‘correct’ and ‘incorrect’ way to use an exchange’s services, and customers should know the risks involved in doing so.


FCA regulation on cryptocurrency and Binance crackdown

There is no doubt that Binance offers its consumers a wide array of features and choices for investing and trading in the world of crypto. For the most part, its consumers are generally satisfied with the services it provides — as is shown by its dominance in the market. Notwithstanding this, in October 2020, the FCA banned the sale of crypto derivatives to UK retail customers. During Binance’s application process under regulation 57 of the 2017 Money Laundering Regulations to become a registered cryptoasset business — with plans to launch a UK-based cryptoasset exchange under the trading name Binance.UK — the FCA discovered that Binance was offering customers crypto derivative products on the Binance.com website. These products were easily accessible, with no barrier to entry for UK-based consumers.

In response, in June 2021 the FCA issued a supervisory notice to Binance Markets Limited, ordering it to stop all regulated activities in Britain and imposing other strict requirements. This is said to be “one of the most significant moves any global regulator has made against Binance”. It prompted Barclays, Clear Junction, Santander, NatWest and HSBC to stop allowing payments to Binance. The impact on the crypto markets was so great that Binance’s rivals saw a boost to the number of users on their platforms.

Binance has also faced regulatory push-back in other jurisdictions. In April, BaFin (Germany’s financial regulator) warned Binance that it risked being fined for offering stock tokens (securities-tracking digital tokens) without providing the necessary prospectuses required under EU securities law. Following this, in July, the Italian Consob (the Italian Securities and Exchange Commission) issued a warning against Binance’s offering of stock tokens and other derivatives products. A day after the Consob’s notice, Binance announced it would suspend its services related to the exchange of stock tokens. However, unlike with stock tokens, Binance has not removed derivatives trading from its platform, and UK customers can still access crypto derivatives through the Binance.com website.

Although Binance users in the UK can still trade on the platform, this may change with the end of the FCA’s Temporary Registration Regime (TRR) for cryptoasset businesses. The TRR allows currently operating cryptoasset firms to continue operations while being assessed by the FCA, with the aim of preventing money laundering and terrorist financing and ensuring consumer protection. The TRR ends on 31 March 2022, and any firms not registered by then may no longer be allowed to operate. A Binance spokesperson has stated: “We take a collaborative approach in working with regulators and we take our compliance obligations very seriously. We are actively keeping abreast of changing policies, rules and laws in this new space.” However, the company has so far not commented on whether it will register under the FCA’s requirements.

Why regulate?

You may be wondering why crypto derivatives need regulating at all. For the purposes of this article, the key concern is consumer protection: people should be aware of the risks so that they can mitigate potential losses. In the UK, CFDs are among the most common derivatives used by investors and speculators to trade underlying securities. They typically involve leverage, and so carry the risk of margin calls. As a result of the increased risk to consumers, the FCA has imposed permanent restrictions on selling CFDs and CFD-like options to retail consumers. Key restrictions include limiting leverage (to 30x) and requiring a standardised risk warning, whereby firms must tell customers what proportion of their retail clients make a loss. These restrictions are in place for good reason, primarily because CFD firms have aggressively marketed to the public products that are not ‘appropriate for them’. This is exemplified by FCA research revealing that 82% of CFD customers lose money. With perpetual crypto futures, by contrast, such rules are barely enforced or simply ignored. For example, using as much as 125x leverage, one customer found herself down more than $250,000 as she attempted to keep her positions from being liquidated, a truly devastating story. Although more experienced investors may laugh or feel no pity at her apparent ignorance, it is a case study that genuinely exemplifies the current problems in the space.
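
The arithmetic behind those liquidations is unforgiving, as the simplified sketch below shows. It ignores fees, funding payments and maintenance-margin buffers, all of which make the real position worse.

```python
# Simplified illustration of why high leverage gets liquidated so quickly.
# Ignores fees, funding payments and maintenance-margin buffers.
def liquidation_move(leverage: float) -> float:
    """Approximate percentage price fall that wipes out the margin on a long position."""
    return 1 / leverage

for leverage in (2, 30, 125):
    move = liquidation_move(leverage)
    print(f"{leverage:>3}x leverage: a {move:.2%} price fall erases the entire margin")

# At the FCA's 30x cap a fall of roughly 3.3% is fatal; at the 125x offered on some
# crypto exchanges, ordinary intraday volatility of under 1% is enough.
```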

What is clear is that regulators globally are having a hard time enforcing any action in the cryptosphere. So Binance is likely here to stay, and even if it disappears, customers will continue to find a way to gamble away their life savings. One thing is for sure, regulatory pressure is mounting, and as time progresses, the crypto world and the real world will become more intertwined.

A lesson to be learnt

Typically, the complexity of these products is not understood by retail traders. Moreover, even if they know the risks, they likely do not know how to properly utilise derivatives as part of a proper trading strategy, as a financial professional would (usually for risk management). Instead, they are simply gambling on market movements. Worse still is when ‘investors’ gamble away their life savings, utilising lines of credit and short-term loans. I have personally lost money trading forex, indices and crypto-based derivatives (albeit a controlled amount) and advocate strongly against this type of behaviour, as tempting as it is.

My advice to those resolute on trying their hand at crypto derivatives is to gamble only what you are prepared to lose, and to understand that you will most likely lose most, if not all, of your money. The odds will forever be stacked against you. If, instead, you are interested in crypto and want to support your favourite projects, purchase the underlying assets themselves and then move them off exchanges onto your own wallet. This way, you will have control of the assets and will not be subject to the risk of margin calls and liquidation. And when we next see gains in the crypto markets, you may celebrate without the devastation of losing everything and having to re-mortgage your home.

Jamie Campbell is a final year law student at Durham University.

This Journal is in no way intended to amount to financial and/or investment advice.

The post The Binance backlash appeared first on Legal Cheek.

Why the Online Safety Bill doesn’t go far enough https://www.legalcheek.com/lc-journal-posts/why-the-online-safety-bill-doesnt-go-far-enough/ Thu, 12 Aug 2021 09:24:40 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=165931 Cambridge University law student Nathan Silver assesses the limitations of the new draft legislation

Cambridge University law student Nathan Silver assesses the limitations of the new draft legislation

In the aftermath of the Euro 2020 final between England and Italy, three young Black footballers — Marcus Rashford, Jadon Sancho and Bukayo Saka — received appalling online racial abuse.

Unfortunately, this is a common occurrence. While online abuse has been a problem since the internet was created, it seems to have intensified recently, with Black footballers often the victims. This was the most high-profile incident yet, and it has resulted in a petition, which has amassed over one million signatures, supporting a lifetime ban from attending games for fans who post racist material.

More recently, Love Island contestant Kaz Kamwi received racist abuse on Instagram when viewers disagreed with her decisions on the show. Her family, controlling the account whilst Kamwi is in the villa, had to release a statement reminding people to be kind.

While the instigators of the abuse are of course to blame, social media platforms could do more. Their main strategy to combat hate crime seems to be the removal of racist content from their sites, with Facebook posting in the aftermath of the final that it “quickly removed comments and accounts directing abuse at England’s footballers”. Twitter removed over 1,000 posts and blocked accounts sharing hateful content within 24 hours. Even so, this is not always successful. Visiting one of the players’ pages shortly after the game revealed swathes of racist emojis and comments, despite the site’s best efforts.

The crux of the issue is the availability of anonymous accounts, which allow individuals to abuse freely without fear of consequence. Most sites, such as Facebook and Instagram, not only allow the creation of anonymous accounts but have no mechanism to prevent an owner from creating a new one after being removed from the site for racism. To sign up to Instagram one must only provide an email address or phone number. Offenders who have been flagged by the sites and removed can easily continue to abuse online through a different account, simply by creating a new email address or using a different number. Even if an offender’s IP address (a unique address that identifies a device on the internet or a local network) is blocked, they can easily change it and create a new account.

The UK government has recognised the need for online platforms to better regulate their sites. It published the Internet Safety Strategy Green Paper in October 2017, which aimed to “ensure Britain is the safest place in the world to be online”. This evolved into the Online Harms White Paper, before finding its final form in the draft Online Safety Bill (the bill). The bill is currently being scrutinised by the Joint Committee on the bill, which is required to report its findings by 10 December 2021, with the aim of the bill becoming law in 2023 or thereabouts.

The bill seeks to appoint Ofcom as an independent regulator of certain “regulated services”, meaning regulated “user-to-user” or “search” services. The regulator will impose duties of care on providers of regulated services, such as, but not limited to, risk, safety and record-keeping duties. Ofcom will have the power to fine companies up to £18 million or 10% of their annual turnover (whichever is higher), and even to block users’ access to their sites, if they fail to fulfil their duties.


The bill is ambitious, and the UK will become the first country to regulate social media platforms in this way if it becomes law. Oliver Dowden, secretary of state for digital, culture, media and sport, claims that the bill will “crackdown on racist abuse on social media”. But online racism appears to be an afterthought. The Online Harms White Paper was criticised in a recent article for covering a “disparate array of ills”. It aimed to tackle hate crime alongside child sexual exploitation and abuse, terrorism, the sale of illegal drugs and weapons, as well as content harmful to children and legal but harmful content. The White Paper presented a risk of racism being forgotten among other online harms. But at least it mentioned hate crime. The draft Online Safety Bill does not mention the words ‘race’, ‘racism’ or ‘hate crime’ at all.

Instead, racism is subsumed under a general ‘duty to protect adults’, which requires platforms to specify how they will deal with harmful content. The bill also imposes, under “reporting and redress duties”, a duty to provide mechanisms that allow the easy reporting of content the platform considers harmful, provide for action against offenders, and are easy to access. But these provisions are incredibly vague, failing to detail specifically how “huge volumes of racism, misogyny, anti-Semitism… will be addressed”. It is possible that, after the review by the Joint Committee, concrete and clear steps to combat racism will be published. But it is concerning that a bill which has been presented as a tool for tackling online racism fails to mention it at all.

The bill also fails to tackle the issue of anonymous accounts. Nicola Roberts, former Girls Aloud star, who herself has suffered online abuse, refused to endorse the bill. Instead she criticised it, claiming that it had “failed to combat the problem of someone’s account being taken down only for them to start a new one under a different name”. And she is right; the bill fails to address the root of the problem. As Roberts puts it, the bill is seeking to “chase the rabbit” rather than “fill the hole”.

As much as social media companies can better inform their users, improve the mechanisms for reporting abuse, and remove abusive messages more quickly, unless anonymous accounts are tackled head on, offenders remain able to abuse others without fear of consequence. To be effective, the bill must focus more on preventing racist abuse in the first place, rather than putting in better mechanisms for reporting accounts and removing comments after the event. Requiring users to sign up using their ID will help with prevention. Individuals will face the real prospect of an employer, or perhaps parents (in the case of children) being contacted, a potential ban from using the platform for life, or even criminal proceedings, rather than a slap on the wrist.

Some interested parties worry about the privacy issues, and the potential for vulnerable people (including children) to be targeted, that could come with any loss of anonymity. But it is worth noting that a user would not be required to have their real name posted online. Rather, there could simply be a requirement for the social media platform to have access to a user’s real name in the event of an offence, meaning worries about lack of privacy or children’s safety online would be largely unfounded. There remain data protection issues associated with the handing over of personal details to a social media company, which require addressing. But this is a small price to pay to better protect users of these platforms from online abuse and racism.

The bill’s motives are clearly to be applauded. Ending the self-regulation of social media companies is certainly a step in the right direction to making the internet a safer place. But if the bill is to seriously tackle the specific issue of online racism, it must highlight it, to ensure it does not become forgotten among a sea of other aims, and it must commit to imposing real repercussions on offenders by ending anonymous accounts.

Nathan Silver is a second year law student at Magdalene College, Cambridge.

The post Why the Online Safety Bill doesn’t go far enough appeared first on Legal Cheek.

AI in healthcare: a legal and ethical balancing act https://www.legalcheek.com/lc-journal-posts/ai-in-healthcare-a-legal-and-ethical-balancing-act/ https://www.legalcheek.com/lc-journal-posts/ai-in-healthcare-a-legal-and-ethical-balancing-act/#respond Fri, 06 Aug 2021 09:15:44 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=165701 Government paralegal Marie Le Frapper considers how best to strike the balance between providing users adequate protection and encouraging growth and investment

Paralegal Marie Le Frapper evaluates the different regulatory approaches to the use of artificial intelligence in healthcare to see which strikes the best balance between providing users adequate protection and encouraging growth and investment

Artificial intelligence (AI) is undoubtedly a hot topic. It is also one of the most high-profile and controversial areas of technological development. Whilst we are still far from robots taking over the earth, its use has become much more common thanks to the improvement of analytical techniques and the increased availability of data.

The healthcare sector is one in which the benefits of AI are undeniable, as this technology can do what humans already do in a more efficient way, and much more besides, such as finding links in genetic codes. However, its use raises several legal and ethical questions that we need to address. This is why governments and international organisations are now focusing on creating an AI-friendly regulatory framework.

As with any new technological development, AI raises many questions that governments and populations need to grapple with. Work on the subject is ongoing at all levels, including in the UK and the European Union. For many of us, AI and algorithms are an opaque science which, we are told, is used for the benefit of us all. However, there is scope for the exact opposite to happen as well. Therefore, in 2020, the High-Level Expert Group on AI appointed by the European Commission set out seven requirements regarded as essential to the ethical use of AI, with a focus on transparency, safety, fairness and accountability.

Data is what fuels AI. Without it, machines would not be able to learn how to ‘think’. This is why the protection of patients’ medical data is paramount and is now an industry and government priority around the world. However, the health sector, in particular, is at high risk of cyber threats because of the sensitive nature of patients’ medical data.

Since this data is at the forefront of scientific and technological innovation, with the life sciences sector being worth several billion, it is also a very attractive target for cybercriminals. For example, Ireland’s Department of Health and Health Service Executive were earlier this year the target of a cyberattack. It was a direct assault on critical infrastructure and resulted in cancellations of non-emergency procedures and delays in treatments. Similarly, in 2017, the NHS was disrupted by the ‘WannaCry’ ransomware. Ensuring that both public and private healthcare providers have the tools to protect patients’ data will increase confidence and lead to many more people being willing to share their medical information with organisations creating AI, so that a large enough database is available for machine learning.

The framework surrounding data protection is ever-changing. Last year, in the Schrems II case, the Court of Justice of the European Union decided to invalidate the Privacy Shield decision granting adequacy to the US. This ruling has had a significant impact on cross-Atlantic trade and data sharing, whilst casting a shadow over the UK as the end of the transition period approached. In the UK, the Supreme Court is due to rule on the possibility of bringing opt-out class actions in data breach cases in Lloyd v Google. As healthcare providers and bodies are targets of choice, greater protection will be required because of the risk of facing potentially expensive claims.

Greater public confidence also leads to the supply of information coming from more diverse populations. We already know that some diseases do not manifest themselves in the same way depending on the ethnic background of the patient. A very simple example can be seen in an AI tool created to detect cancerous moles. In its early stages of development, such an AI may have been trained on a database composed mostly of images of white skin, meaning it will be less likely to find cancerous patterns on darker skin.
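
The effect is easy to reproduce on synthetic data. In the sketch below the ‘image features’ are just random numbers and the groups are artificial, but the same model trained on a 95/5 split performs markedly worse on the under-represented group.

```python
# Synthetic illustration of the training-data imbalance described above.
# The 'image features' are random numbers, not real dermatology data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_group(n, malignant_shift, seed):
    """Half benign, half malignant; the malignant 'signal' sits at malignant_shift."""
    rng = np.random.default_rng(seed)
    benign = rng.normal(0.0, 1.0, size=(n // 2, 2))
    malignant = rng.normal(0.0, 1.0, size=(n // 2, 2)) + malignant_shift
    X = np.vstack([benign, malignant])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    order = rng.permutation(n)
    return X[order], y[order]

# The malignant pattern presents differently in each group, a crude stand-in for
# lesions looking different on lighter and darker skin.
X_light, y_light = make_group(2000, np.array([3.0, 0.0]), seed=1)
X_dark,  y_dark  = make_group(2000, np.array([0.0, 3.0]), seed=2)

# Training set: 95% lighter-skin images, 5% darker-skin images.
X_train = np.vstack([X_light[:1900], X_dark[:100]])
y_train = np.concatenate([y_light[:1900], y_dark[:100]])
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy on held-out lighter-skin images:", model.score(X_light[1900:], y_light[1900:]))
print("accuracy on held-out darker-skin images: ", model.score(X_dark[100:], y_dark[100:]))
# The under-represented group receives markedly worse predictions from the same model.
```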


Another issue that arises out of the use of AI is that of discrimination. The Dutch government used an algorithm named SyRI to detect possible social welfare fraud, based on criteria such as the amount of running water used by a household. However, freedom of information requests revealed that SyRI was predominantly used in low-income neighbourhoods, exacerbating biases. Eventually, the District Court of The Hague ruled that SyRI violated article 8 of the European Convention on Human Rights, which protects the right to respect for private and family life. The benefits created by AI should not be obscured by biased machine learning, which can be corrected by proper human oversight.

As AI is being democratised, and the above challenges become more obvious, governments are focusing on creating a framework that strikes a balance between building an environment that is welcoming for businesses in this area, such as life sciences organisations and pharmaceutical companies, and offering sufficient protection for our data.

The cash injections and investments made during the pandemic in the life sciences sector are due to remain as the Prime Minister seeks to strengthen the role of the UK as a leading country in this sector. Since leaving the European Union, the UK government has announced plans to invest £14.9 billion in the 2021/2022 year, rising to £22 billion by 2025, on research and development in the life sciences industry, with a focus on technology.

In a draft policy paper released on 22 June 2021, entitled ‘Data saves lives: reshaping health and social care with data’, the Department of Health and Social Care set out its plan for the future at a moment when our health data is key to reopening society. Chapters 5, 6 and 7 of the policy paper focus on empowering researchers with the data they need to develop life-saving treatments, developing the right technical infrastructure, and helping developers and innovators to improve health and care, with a specific focus on encouraging AI innovations and creating a clear and understandable AI regulatory framework. For example, amendments were made to the government guidance on AI procurement encouraging NHS organisations to become stronger buyers, and a commitment was made to develop, by 2023, unified standards for the efficacy and safety testing of AI solutions, working closely with the Medicines and Healthcare products Regulatory Agency and the National Institute for Health and Care Excellence.

Another initiative is the AI in Health and Care Awards. In its first round, there were 42 winners, including companies such as Kheiron Medical Technologies for MIA (“Mammography Intelligent Assessment”). MIA is deep learning software developed to solve challenges in the NHS Breast Screening Programme, such as reducing missed diagnoses and tackling delays that put women’s lives at risk. The use of such software has a significant impact on public health, saving lives thanks to early diagnosis and reducing the cost of treatment for the NHS. Indeed, research has shown that about 20% of biopsies are performed unnecessarily.

Although the UK is no longer bound by EU law, developments in this sector happening on the continent need to be kept in sight. In April 2021, the European Commission published draft regulation on the harmonisation of the rules on AI. Although it takes a risk-based approach to AI, it is worth noting that the draft regulation prohibits the use of AI for social scoring by public authorities and real-time facial recognition (as in the 2020 Bridges v South Wales Police case). Maximising resources and coordinating investments is also a critical component of the European Commission’s strategy. Under the Digital Europe and Horizon Europe programmes, the Commission also intends to invest €1 billion per year in AI.

Furthermore, now that the UK has been granted adequacy, meaning the EU recognises that the level of protection afforded to personal data in the UK is comparable to that afforded by EU legislation, data can continue to flow between the two parties, and significant divergences are unlikely to arise in the near future. Similar to the Covax initiative, greater collaboration, and the development of AI trained on databases composed of both EU and UK data, would not be surprising.

Governments and stakeholders are now looking at AI as the future of healthcare. Although its use raises many ethical questions, the benefits are likely to outweigh the risks, provided the regulatory framework surrounding it offers both flexibility for innovators and stringent requirements protecting our medical data. The approaches taken by the UK and the EU seem to focus on the same relatively non-contentious criteria. However, it seems that the UK government is more willing to invest in the sector, surfing on the country’s reputation in genome sequencing. Anyone with an interest in new technology, healthcare and data protection should keep an eye on upcoming developments in this field, as they promise exciting discussions.

Marie Le Frapper is a paralegal working for the Government Legal Department. She graduated with a degree in law and French law from the University of London and now hopes to secure a training contract to qualify as a solicitor.

The post AI in healthcare: a legal and ethical balancing act appeared first on Legal Cheek.

Why Henry III would have regulated crypto communities better than us https://www.legalcheek.com/lc-journal-posts/why-henry-iii-would-have-regulated-crypto-communities-better-than-us/ https://www.legalcheek.com/lc-journal-posts/why-henry-iii-would-have-regulated-crypto-communities-better-than-us/#respond Fri, 04 Jun 2021 09:13:53 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=163483 Future magic circle trainee William Holmes compares medieval and blockchain communities

Future magic circle trainee William Holmes compares medieval and blockchain communities

Image of Henry III via Wikimedia Commons

In 1255, King Henry III authorised a writ stating that “villages and communities of villeins [tenant farmers] […] ought to be able to prosecute their pleas and complaints in our courts”. The writ was the Crown’s attempt to fill a void in its legal authority that had emerged as medieval communities increasingly acted as independent entities in the eyes of local law. Hundreds of years later, blockchain technology has created a similar legal void between the state and decentralised communities.

Thus far, this has remained largely unaddressed. Regulators, judges and legislators have primarily been concerned with the categorisation of digital tokens (like cryptocurrencies). However, it is decentralised communities — composed of developers (individuals who build and propose changes to a blockchain), miners (individuals who validate changes to a blockchain) and a variety of other participants — rather than the tokens these communities use to finance their objectives, that determine what a blockchain does.

A handful of states have, however, begun to recognise decentralised organisations as legal entities. The most complete proposal is Malta’s Innovative Technology Arrangements and Services Act which allows decentralised organisations to obtain legal recognition by registering with a regulator. But, like King Henry III’s writ, legislative attempts by a state to remedy the lack of control it has over a community quickly become obsolete and meaningless. Such measures fail to establish a more flexible legal principle that can extend the rule of law to any decentralised community, regardless of whether it admits it is a corporation or not.

Concession and “real entity” theory

There is a good reason why we have thus far failed to establish such a principle. Our current notion of corporate legal personhood can only understand a corporation in relation to the state; a group must have legal personhood bestowed upon it by the state. Unsurprisingly, this tradition, known as concession theory, has failed to acknowledge the fact that blockchain communities can act like autonomous legal entities, regardless of whether they are recognised by the state or not. The consequence of concession theory is that certain groups are, by default, ignored by the law until the legal ramifications are so great that the state is forced to take action.

A more suitable approach for blockchain communities is the solution developed by medieval societies in the 12th and 13th centuries, known as “real entity” theory. As Otto von Gierke points out, in medieval times, the functional and social existence of a group was sufficient for it to be treated as a corporate legal person. A corporation was “a living organism and a real person, with body and members and a will of its own […] it wills and acts by the men who are its organs as a man wills and acts by brain, mouth and hand”.

In light of this, although King Henry III’s 1255 writ conceding legal status to medieval English communities appears to vindicate concession theory, quite the opposite is true. Henry’s writ was merely a formality; these medieval communities had already been functioning as legal entities since the end of the 12th century. From at least 1199, there is evidence of villages, towns, parishes and guilds entering into contracts, assuming liabilities and resolving disputes in court as groups. Shared cultures and identities allowed these groups to so readily become “communities at law”: they were real social organisms. And the Crown, which had little control over these communities’ affairs, was all too happy to accept their legal independence (provided that they paid their taxes and didn’t seriously challenge the authority of the state).


Booze and the badger dance

I would argue that blockchain communities can be considered real social organisms, akin to medieval communities. Owing to its decentralised design, community is inherent to blockchain’s characteristics. A single individual cannot run a blockchain on their own, but is forced to rely on others. Interestingly, recent research comparing the levels of Bitcoin penetration in different countries found that the higher the nation’s individualism, the lower the Bitcoin penetration. This implies that communal ideology (as opposed to individualism) is a core feature of these groups’ identities and functions.

There are also some more obvious indicators. Medieval guilds, for example, famously manifested their shared culture and identity in elaborate oaths of fraternity, debauched feasts and boozy initiations that were used to form strong social bonds between members. Similar traditions can be found in blockchain communities. The Ethereum community, for example, can be identified by their famous badger dance (here performed by the community’s creator Vitalik Buterin). As one member of the Ethereum blockchain community explained, “I think the dancing in itself speaks so much about Ethereum’s cultural values related to freedom, creative expression, fun, unconventionality, and even the desire for collective unity to some extent.”

A world of neighbours

Given that decentralised communities appear to meet the standard required for “real entity” theory to function, the flexibility offered by “real entity” theory would fill the legal void surrounding decentralised communities, as it did in medieval societies. But, should this medieval legal practice be applied to blockchains? And what are its limitations?

First, we should question how these communities regulate internal members’ relations. Both medieval and blockchain communities used a combination of mutual dependence and dispute resolution mechanisms to achieve this. Scholars, such as Barbara Hanawalt, have described medieval peasant villages as “a world of neighbours” in which durable relationships and mutual dependence induced cooperation amongst members in most cases. If cooperation broke down, many groups had their own tribunals that varied from community to community in order to resolve disputes.

Similarly, developers and miners are incentivised to cooperate by game theory mechanisms (the risk of a hard fork and the benefits of building a network with a large community) and the shared culture and identity described above. Furthermore, some systems have also developed dispute resolution mechanisms. For example, the Graph protocol allows members to submit disputes that are either satisfactorily resolved on-chain or decided by an arbitrator who is part of the community. The effectiveness of these mechanisms is a test of these communities; the break-down of a group’s social cohesiveness would make “real entity” theory ineffective and require the state to step in. So, whilst groups are capable of internal regulation, there is always a risk that some communities will break down, to the detriment of their members.

Second, we should question the effectiveness of the systems of group representation. Like modern corporations, medieval communities used representatives. A key difference, however, was that there was little concern over who was chosen and whether the procedure was fair and democratic. So cohesive were many medieval communities that it appears it was simply a member’s obligation to speak for the group. This meant that a group’s leading men (on the grounds of their expertise or status) would act as representatives, no questions asked. The emphasis was less on procedure and more on the substance of the matter. Blockchains, on the other hand, are designed for democracy. This does not mean, however, that leading figures might not be called upon to represent the community. Therefore, it appears (depending on the norms of the community) that establishing legitimate representatives is possible, but this will again test the community’s strength and electoral preferences.

Decentralised communities at first seem to present the state with an alien and irreconcilable issue. But, some reflection on the medieval “communities at law” demonstrates how a more localised system can function. It would, therefore, be fair to assume that a regulatory response to blockchain technology might well have come more naturally to Henry III than today’s heads of states. It remains to be seen whether our more individualistic societies can find a satisfactory mechanism for regulating the community culture that has been reborn by decentralised technology.

William Holmes is a penultimate year student at the University of Bristol studying French, Spanish and Italian. He has a training contract offer with a magic circle law firm.

The post Why Henry III would have regulated crypto communities better than us appeared first on Legal Cheek.

The future of cryptocurrency https://www.legalcheek.com/lc-journal-posts/the-future-of-cryptocurrency/ https://www.legalcheek.com/lc-journal-posts/the-future-of-cryptocurrency/#respond Tue, 23 Mar 2021 11:44:39 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=161229 Pinsent Masons paralegal Shanelle Mattu tracks the evolution of currency and considers whether crypto assets are the future of money

Pinsent Masons paralegal Shanelle Mattu tracks the evolution of currency and considers whether crypto assets are the future of money

The evolution of currency

Long ago, people traded goods and services directly through a system of bartering. Over time, and in order to speed up and standardise transactions, currencies began to form. The first use of an industrial facility to manufacture coins that could be used as currency is thought to date back to 600 B.C., when the elites of Lydia (modern-day western Turkey) used stamped silver and gold coins to pay armies. Moving forward through the centuries, we have come to rely on fiat currencies (coins, banknotes and credit cards) to purchase goods and services. The 21st century has graced us with yet another form of currency — the virtual currency, also known as cryptocurrency. In 2009, Bitcoin became the very first cryptocurrency to hit the market, invented by Satoshi Nakamoto in response to the 2008 financial collapse.

Today, there are over 4,000 types of cryptocurrency circulating in the global digital infrastructure. These virtual currencies can be used to buy goods and services using an online ledger, built on top of blockchain technology, with strong cryptography securing the transactions.

Understanding blockchain

Blockchain can be pretty complex, but just think of a database storing information in blocks, with chains linking these blocks in chronological order. Different types of information can be stored on a blockchain; for example, it is used in land registry to record things such as structural changes to buildings, and “if the property is sold, all the relevant documentation can be transferred to the new owner. Every transaction is traceable, timestamped, and indisputable”. The most common use of blockchain so far has been as a ledger for transactions. Blockchain technology serves as the backbone and enabler of the existence of cryptocurrency.
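
For readers who want to see the ‘blocks plus chain’ idea concretely, here is a minimal sketch in Python. It is nothing like a production blockchain (there is no network, mining or consensus), but it shows why a record chained by hashes is so hard to rewrite quietly.

```python
# Minimal sketch of the 'blocks chained in chronological order' idea: each block
# stores the hash of the previous block, so editing history breaks every later link.
import hashlib
import json
from datetime import datetime, timezone

def block_hash(block):
    content = {k: block[k] for k in ("timestamp", "data", "previous_hash")}
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

def make_block(data, previous_hash):
    block = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data": data,                      # e.g. a land registry entry or a payment
        "previous_hash": previous_hash,
    }
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    for previous, current in zip(chain, chain[1:]):
        if current["previous_hash"] != previous["hash"] or current["hash"] != block_hash(current):
            return False
    return True

chain = [make_block("genesis", previous_hash="0")]
chain.append(make_block("Title of 1 Example Street transferred to B", chain[-1]["hash"]))
chain.append(make_block("B grants a five-year lease to C", chain[-1]["hash"]))

print(chain_is_valid(chain))                                     # True
chain[1]["data"] = "Title of 1 Example Street transferred to M"  # quietly rewrite history
print(chain_is_valid(chain))                                     # False: the tampering is detectable
```

On a real, decentralised blockchain the same check is performed by thousands of independent nodes, which is what gives the ledger its integrity.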

Dissolution of power

Bitcoin transactions are recorded on a blockchain in a decentralised way — this means that no single person or group has control over the way data is stored; rather, all users collectively retain control. This maintains the integrity of the ledger and prevents people from corrupting the system, quite unlike our traditional currencies, which are controlled by central banks and governments. This doesn’t mean that decentralised systems are capable of defying any form of manipulation, but it does mean that there is no single point of attack, making them highly hack-resistant.

Because blockchain is decentralised and not controlled by one single authority, it is feared by central banks and governments, which cannot manipulate the value of cryptocurrency in the same way they do with our traditional currencies. This means that, if we were to adopt cryptocurrency as our primary form of currency, central banks and governments would not be able to interfere, and therefore could not control the rate of inflation. Furthermore, transactions over a blockchain cut out government tax revenue and the fees and interest charged by financial institutions. Whilst this sounds appealing, there are, understandably, fears and uncertainty about the nature and uses of cryptocurrencies.

Using cryptocurrency in transactions

Legal tender refers exclusively to fiat money and is recognised as satisfactory payment to extinguish a debt. Although cryptocurrencies are not legal tender, there is no reason that cryptocurrency cannot be accepted as a form of payment. Firstly, credit card payments are not legal tender, yet they are widely accepted as a form of payment; and secondly, Bitcoin is already accepted as a form of payment by Starbucks, Whole Foods and other major retailers. US-based payments start-up Flexa believes that “the best way for global commerce to become more efficient and accessible is by bringing cryptocurrency to the masses”. Furthermore, this year Elon Musk’s electric car company, Tesla, bought $1.5 billion (£1.1 billion) in Bitcoin and said it expects to start accepting it as payment in the future. This purchase led to a spike in the value of Bitcoin, demonstrating that an increase in confidence in the crypto markets equals an increase in market value.


Imposing regulations on blockchain and cryptocurrency

There is no doubt a huge need to educate ourselves on blockchain and cryptocurrencies, so as to prepare for the opportunities that come with new emerging technologies. Yet, in the 12 years since cryptocurrency was introduced, the rate at which people have become familiar with it has been very slow, and financial authorities and governments around the world are beginning to scrutinise cryptocurrencies rather than educating people about something that is already taking shape.

In January 2021 the UK’s Financial Conduct Authority (FCA) imposed a regulatory ban on the sale of cryptocurrency derivatives to retail consumers, which only goes to show the scepticism towards, and the lack of control authorities have over, cryptocurrencies. Across the Atlantic, towards the end of 2020, the US Securities and Exchange Commission (SEC) filed a multi-billion dollar lawsuit against Ripple Labs (Ripple), the company behind one of the major cryptocurrencies, XRP, and its executives over the selling of XRP, claiming that the token fails to meet the criteria of a currency, instead fits the criteria of an investment contract, and is thus a security. The SEC is now seeking injunctive relief and disgorgement of alleged ill-gotten gains.

It’s worth noting here that the SEC deems the cryptocurrencies — Bitcoin and Ethereum — to be currencies, which raises the question of the difference between what is considered a currency and what is considered a security within the crypto space. The SEC argues that Bitcoin is a currency because it is decentralised, while XRP is classified as a security since it is controlled by a central authority.

Many believe that the SEC’s “misguided” lawsuit against Ripple will cause further harm to cryptocurrency, and this has proven true. Since the lawsuit was filed, various exchanges have begun to delist XRP, causing public doubt in the value of various cryptocurrencies in general. On the other hand, it is thought that this lawsuit will bring about the regulation required, and perhaps bolster certainty and public confidence. Recently, on 9 March 2021, it was reported that US lawmakers had introduced a bipartisan bill to clarify crypto regulation. Perhaps this bill will provide the public with the much needed clarity over the differences between a currency and a security, so that they can make choices for themselves.

A highly volatile market

When it was first introduced, Bitcoin was worth less than US $1. It is now worth over US $50,000 (£36,000). The surge in price is down to simple supply and demand, since there are only 21 million Bitcoins that can ever be mined. As of 24 February 2021, 18.638 million Bitcoin had been mined, which leaves 2.362 million yet to be introduced into circulation. Adding to the fluctuations in the cryptocurrency market are decisions such as those taken by the SEC and FCA which, perhaps unintentionally, provoke more uncertainty in the overall crypto market, resulting in traders making choices based on emotion, and thus leading to increased volatility and rapid swings in liquidity.
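
That 21 million figure is not a marketing flourish; it follows from Bitcoin’s issuance rules, which a few lines of arithmetic can reproduce. The sketch below is a simplified floating-point version of what the protocol does with integer units.

```python
# Reproducing the roughly 21 million cap from Bitcoin's issuance schedule: the block
# reward began at 50 BTC and halves every 210,000 blocks until it rounds to nothing.
# (The real protocol works in whole satoshis, so the true cap is fractionally lower.)
reward = 50.0
blocks_per_halving = 210_000
total_supply = 0.0

while reward >= 1e-8:          # 1e-8 BTC, one satoshi, is the smallest unit
    total_supply += reward * blocks_per_halving
    reward /= 2

print(f"Maximum possible supply: about {total_supply:,.0f} BTC")   # about 21,000,000
```

Because more than 18.6 million of those coins have already been mined, the remaining supply trickles out ever more slowly, which is part of what drives the scarcity-based price argument.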

What will come of all this?

Interest in crypto was sparked by the 2008 financial crisis, when people lost faith in the existing financial infrastructure. Major events such as WWI, WWII and the 2008 financial crisis caused devastating and long-lasting effects on the global economy. In the midst of recovery, our livelihoods and economy have taken yet another blow with the COVID-19 pandemic. To save us from financial burden, central banks are printing more and more money, but we must ask ourselves how much longer this process of quantitative easing can be sustained. Through it, our traditional currencies are being steadily devalued. Today, our money has value because our government tells us so, but since the Nixon Shock of the 1970s the US dollar (the world’s reserve currency) has not been backed by gold or anything else of intrinsic value. With the cost of living rising and a bleak outlook for the recovery of our economy, it is no wonder that we are looking for a golden parachute.

Based on the above, and given the current rate of technological advancement, businesses and individuals alike are adapting. Fintech firms, for instance, have already begun to adopt technologies that rely on blockchain in order to stay ahead of the competition. Whilst authorities are imposing regulations on cryptocurrency, they recognise that they cannot stop the march of progress: “markets and tech are always changing”, and “fintech can be a powerful force for good” (Gensler, 2021). This could mean that we will see blockchain and cryptocurrency become mainstream within the next five to ten years.

Shanelle Mattu is a paralegal at Pinsent Masons in the cyber team. She studied English literature at Westminster University and completed the GDL at The University of Law. She aspires to become a solicitor.

For more insight, check out Pinsent Masons’ articles hub.

The post The future of cryptocurrency appeared first on Legal Cheek.

Why Elon Musk’s pigs are a legal headache
https://www.legalcheek.com/lc-journal-posts/why-elon-musks-pigs-are-a-legal-headache/
Wed, 16 Dec 2020 09:18:01 +0000

Bristol University student and future trainee William Holmes explores the challenges ahead for brain-computer interface (BCI) systems

Elon Musk (credit: Duncan.Hull via Wikimedia Commons) and Gertrude

Elon Musk’s pig, Gertrude, looks like any other pig. But the coin-sized chip Musk’s company Neuralink have placed in Gertrude’s brain makes her a key part of a ground-breaking experiment to discover if technology can enable us to do things with thoughts.

The chip is a brain-computer interface (BCI) which picks up neural activity. Musk hopes to decode this neural activity so that it can be understood as instructions for a computer, allowing BCI users to control a computer with their minds. In other words, BCIs can transform a thought into an act.

For many who have lost certain bodily functions, BCI technology is a scientific miracle. The technology has the potential to treat neurological conditions like dementia or Parkinson’s, restore paralysed individuals’ ability to control their bodies, and even allow the blind to see again. But for prosecutors, judges and policy makers, BCIs are a troubling legal headache.

Proving criminal responsibility for most crimes requires the prosecution to prove both a defendant’s criminal act (actus reus) and intention (mens rea). So, how would this work for a defendant who used a BCI to commit a crime? An act is defined in most legal systems as “a bodily movement” (the quote here is from the US Model Penal Code). But a crime committed using a BCI involves no bodily movement. Nevertheless, if we take a neuroscientific approach, this is not an insurmountable obstacle for a prosecutor.

The chain of causation for a BCI user is as follows. First, the BCI user imagines an act that they want the computer to perform (I shall refer to this as a “mental act”). Second, the mental act triggers neural activity, which serves as the input to the BCI. Finally, the BCI interprets this neural activity and performs the act. Just as a finger pulls the trigger of a gun, neural activity triggers the BCI. Therefore, the neurons that fire and produce measurable neural activity could plausibly be considered the actus reus in cases involving the use of BCI technology. So it appears that a legal loophole in prosecuting disembodied acts can be avoided. But at a price.
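
To picture that chain of causation concretely, the toy sketch below walks through the same three steps in code. It is purely illustrative: the function names, the fake electrode readings and the threshold rule are all invented for this example and bear no relation to Neuralink’s actual software.

```python
# Illustrative only: a toy version of the BCI chain of causation described above.
# Names, numbers and the decoding rule are invented; real BCIs are far more complex.
from typing import List

def record_neural_activity() -> List[float]:
    """Stand-in for electrode readings produced by the user's 'mental act'."""
    return [0.12, 0.87, 0.91, 0.08]  # pretend firing rates

def decode(readings: List[float]) -> str:
    """Toy decoder: if average activity crosses a threshold, treat it as a 'click'."""
    return "click" if sum(readings) / len(readings) > 0.4 else "rest"

def perform(act: str) -> None:
    """The disembodied 'act' a prosecutor would have to locate."""
    if act == "click":
        print("Computer performs the user's intended action.")

perform(decode(record_neural_activity()))
```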

By finding actus reus in the activity of a defendant’s neurons, we have been forced to expand the law into the mental sphere. This is a sphere which, in keeping with the Roman law maxim that “nobody shall be punished for thoughts” (cogitationis poenam nemo patitur), is not regulated by the law. In the UK, this doctrine is enshrined in article 9 of the European Convention on Human Rights, given effect by the Human Rights Act 1998. Given the repercussions for our freedom of thought, is it acceptable to regulate BCIs? If not, can legal systems that only regulate outward behaviour properly maintain the rule of law in BCI cases?

The middle ground between a BCI Wild West and criminalising thoughts is granting BCI users the ability to waive their right to freedom of thought. For those to whom this technology offers the most, for example tetraplegics, this may well be a right they are happy to waive. But should an individual be allowed to make such a decision? Legislators would have to step in to clarify who can use BCIs, and judges would have to recognise implied consent from BCI users to waive this right to freedom of thought.

When deciding this, we must not ignore how significant this expansion of government regulation would be. For the first time, certain thoughts or mental acts would be outlawed. As a result, law-abiding BCI users would be forced to think before they think, regulating themselves in an unprecedented way. This is the immediate ‘legal headache’: BCIs force us to consider the merits of curtailing a human right that is fundamental to democratic society and individual liberty in order to avoid criminal loopholes.

There is, however, a second long-term ‘legal headache’. Using the brain’s neurons to establish responsibility forces us to reconsider how we determine responsibility more broadly. How we attribute responsibility is (and has always been) a social decision. In some societies in the past, if an act was compelled or inspired by a divine force, then the law did not deem the individual responsible. In societies where an artist considered the muses responsible for their work, an acceptable waiver of responsibility was the excuse that “God made me do it”.

Today, we consider the person who acts to be responsible. But this could change in the future, especially if BCIs help push neuroscience to the forefront of the legal system. A recent example of the influence of neuroscience on policy is Holland’s adolescent criminal law, which came into force in 2014. This law allows those aged between 16 and 22 to be tried as adults or as juveniles at the court’s discretion. The underlying rationale is neuroscientific: Holland’s new system hopes to take into consideration the mental development of defendants when sentencing them. This represents a social shift towards seeing the brain as the responsible agent.

This shift, which was famously critiqued as “brain overclaim syndrome” by Stephen J. Morse, could have some troubling consequences. The data recorded by BCIs (especially from the amygdala, which regulates emotion) offers temptingly persuasive evidence of a defendant’s mens rea and mental state. The question for judges is whether this data is admissible evidence.

A neurocentric legal culture would encourage a judge to admit such evidence. If admissible, a high level of cross-examination is vital to ensure that there is clarity around neuroscience’s technical and interpretive limits. For example, there is evidence that factors like parenting and socio-economic status change the way the amygdala and prefrontal cortex function. The fact that neuroscientific technology is overwhelmingly tested on students from Western, Educated, Industrialised, Rich and Democratic (WEIRD) populations means that there is a possible bias in interpreting neuroscientific information. Left unquestioned, these limitations allow lawyers to cast speculative aspersions based on competing expert testimony, which could lead juries to jump to false conclusions.

Furthermore, if the brain is considered responsible for criminality, then reform of the penal system implicitly follows. The chances of recidivism and the methods by which guilty prisoners are treated, whether rehabilitative or punitive, would no longer be assessed on the basis of human nature and character. Instead, neuroscience would nuance our understanding of criminality and how to treat it. And the result might not be dissimilar to the Ludovico Technique, a type of psychological treatment that Anthony Burgess portrays in his dystopian novel A Clockwork Orange.

Gertrude the pig is just the start of a technology that could rewire the legal norms of responsibility and radically change the legal concept of action. In light of this, policy makers and judges must prepare the criminal justice system for the advent of BCIs. There is currently no regulation specific to BCI technology in the UK, as the British government acknowledged in a report published in January 2020. That is because the technology is still being developed and there are no clear solutions yet. But one thing is for sure: Elon Musk’s pigs promise to be a complex legal headache for scholars, lawyers, judges and legislators for decades to come.

William Holmes is a penultimate year student at the University of Bristol studying French, Spanish and Italian. He has a training contract offer with a magic circle law firm.

The post Why Elon Musk’s pigs are a legal headache appeared first on Legal Cheek.

Why the law should treat algorithms like murderous Greek statues
https://www.legalcheek.com/lc-journal-posts/why-the-law-should-treat-algorithms-like-murderous-greek-statues/
Thu, 17 Sep 2020 10:32:32 +0000

Future magic circle trainee William Holmes considers whether ‘mutant algorithms’ should have their day in court, following this summer’s A-Level exam results fiasco

Can a statue be responsible for murder? Ancient Greek lawyers successfully persuaded jurors that it could be.

In the 5th century BC, a bronze statue of the star heavyweight boxer Theagenes of Thasos (known as the “son of Hercules” by his fans) was torn down by some of his rivals. As the statue fell, it killed one of the vandals. The victim’s sons took the case to court in pursuit of justice against the murderous statue. Under Draco’s code, a set of ancient Greek laws famed for their severity, even inanimate objects could feel the sharp edge of the law. As a result, Theagenes’s statue was found guilty and thrown into the sea.

The practice of allocating legal responsibility to objects did not end with the ancient Greeks, but continued from the 11th century in the form of deodands. A deodand is a piece of property deemed to have committed a crime. As punishment, the offending property would be handed over to the relevant governing body so that it could be repurposed for a pious cause. Normally, criminal objects were sold and the money was used for some communal good (in theory at least). Deodands remained part of UK law until 1846, and the concept still exists in the US today, last being referenced by the US Supreme Court in 1974.

The rationale behind such seemingly nonsensical allocations of responsibility is to provide a sense of justice in society: nothing can escape the grasp of law and justice. Today, there is a new class of object that is a source of public outrage: computers. The algorithms inside computers have been found to produce racial, gender and class biases. But no “mutant algorithm” has yet received its just deserts in the courtroom.

So, how should we respond to algorithmic injustices? Should we grant algorithms deodand status and cross examine computers in court?

‘Computer says no’

Algorithms are simply a logical “set of rules to be followed in problem-solving”. Today, they are used in technology for a variety of purposes. In the US, for example, algorithmic decision making (ADM) is used by certain banks, human resources teams, university admissions, health services and the criminal justice system. This means that a citizen’s ability to get a loan or a mortgage, get a job, get into university, receive medical care and be treated fairly by the criminal justice system is at the discretion of a computer.

There is a reason why they are this widely used: algorithms can contribute a great deal to making our lives more efficient and fairer. Humans suffer from flaws such as bias, error and inconsistency. An algorithm does not suffer from these human flaws and therefore has the potential to make a positive impact on society.

But this does not mean that algorithms are perfect, especially when making decisions on nuanced human issues. From a computer science perspective, using algorithms to achieve policy goals is complicated and unpredictable. The use of big data sets and machine learning (where a computer independently interprets patterns that inform its decisions) gives rise to instances where “computer says no” with unintended consequences that can breach the law or offend society.

And when something goes wrong, there is a great deal of outrage. Recently, A-Level students angrily protested against the UK’s exam regulator, Ofqual, which used what Prime Minister Boris Johnson later described as a “mutant algorithm” that downgraded almost 40% of students’ grades, with a disproportionate effect on those from underprivileged backgrounds. Similarly, there have been troubling results from algorithms used in the justice system, whilst Facebook’s ad targeting algorithms have breached laws on discrimination as well as offending public sentiment about inequality.
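
To see how an apparently neutral rule can produce that kind of disproportionate effect, consider the deliberately simplified sketch below. It is not Ofqual’s actual model; the capping rule and the data are invented. But anchoring an individual’s result to their school’s historical performance is the kind of design choice that drove the 2020 controversy.

```python
# Deliberately simplified illustration: anchoring individual grades to a school's
# historical results can downgrade strong students at historically weaker schools.
# This is NOT Ofqual's real algorithm; the rule and the data are invented.

def award_grade(teacher_grade: int, school_best: int) -> int:
    """Toy rule: no student may exceed the best grade their school has achieved before."""
    return min(teacher_grade, school_best)

students = [
    {"name": "A", "teacher_grade": 9, "school_best": 9},  # school with strong past results
    {"name": "B", "teacher_grade": 9, "school_best": 6},  # historically weaker school
]

for s in students:
    final = award_grade(s["teacher_grade"], s["school_best"])
    print(f"Student {s['name']}: predicted {s['teacher_grade']}, awarded {final}")
# Student B is downgraded from 9 to 6 purely because of where they studied.
```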

As recent surveys have revealed, scandals like these mean that people increasingly distrust algorithms which, despite their imperfections, offer important improvements to society. A common response to these issues is to call for increased transparency and accountability. But, due to its technical nature, algorithmic accountability can be complicated to determine and difficult to communicate to those without expert knowledge of computer science. These limitations on transparency have fuelled public frustration and distrust of algorithms.

So, is there a way to get justice (or at least a sense of justice) from any future mutant algorithms?

Dealing with unaccountable crimes

The possibility of bringing back deodands for algorithms offers an interesting solution. The justice system’s long history with inanimate objects reveals a great deal about the social function of trials.

From a technical perspective, putting inanimate objects on trial is madness. Yet ancient Athens developed a special court, the Prytaneion, dedicated to hearing trials of murderous inanimate objects. Policing and applying the law to these objects was also of great importance. In Britain, from 1194 until the 18th century, it was the duty of a coroner to search for suspicious inanimate objects that could have played a role in a sudden or unexpected death, whilst determining the fate of criminal objects was frequently a topic of heated debate for senior statesmen in ancient Greece. Why?

Both in ancient Greece and Britain, these trials played an important role in responding to unaccountable (due to the absence of a guilty human), yet traumatic crimes. For the ancient Greeks, murder was especially traumatic because they felt a shared responsibility for the crime. This meant that they spent a great deal of time and money symbolically providing a narrative for the rare occasions when a murder was unaccountable (as happened with the statue of Theagenes of Thasos).

Moreover, the variation in juries’ conclusions reflects how these trials were primarily concerned with social justice within specific communities rather than broad legal rules. This can clearly be seen from the regional differences in medieval British deodand cases arising from horse-and-cart accidents. In one case in Oxfordshire, a jury felt that a cart and its horses were responsible for the death of a woman called Joan. In another case, only a specific part of a cart was deemed guilty of murdering a Yorkshireman, whilst its fellow parts were spared. But in Bedfordshire, the blame for the death of a man named Henry was pinned upon a variety of culprits: the horses, the cart, the harnesses and the wheat that was in the cart at the time of the incident.

From the Prytaneion to Parliament Square

Deodands not only provided a mechanism for developing a communal narrative for crimes where justice is hard to find, but also replicated a form of compensatory justice. Fast forward to the industrial revolution in 19th century Britain, and communities were faced with injustices created by new technology.

Like algorithms, trains and railways incited outrage and fear in the 1830s and 1840s. In 1830, William Huskisson, the then MP for Liverpool, was so terrified when he got off a train on a new passenger line that he ran off the platform and onto the rails. His death was a landmark railway passenger casualty that overshadowed calls from engineers, architects and enthusiasts to embrace the wonders of the “iron road” in the minds of the public.

This led to a strong revival in deodand cases. Trains were being put on trial and train companies had to pay whatever the jury felt was fair. In the absence of widespread life insurance, deodands for the murderous trains were set at very high prices in order to compensate victims. Ultimately, the sympathy of jurors led to excessively costly rulings and provoked parliament to abolish deodands in the UK. Nevertheless, the 15 years in which they were effective allowed for a form of compensatory justice to develop in reaction to new and traumatic technology, which helped to boost public confidence in trains.

In the same way, algorithms have great potential to help create better solutions and a more equal society. The problem is that the public does not trust them. Just as the “iron roads” were feared in the 1830s, today the menace of an all-powerful black box algorithm dominates the public perception of algorithms. The potential effect of granting algorithms deodand status is twofold. First, putting computers on trial publicly provides a performative and ceremonial sense of justice, similar to the ancient Greeks’ trials in the Prytaneion. This ceremonial justice has already been highlighted by artists like Helen Knowles. Second, providing compensation that is legally designated to a guilty computer communicates a direct and powerful sense of legal accountability to the public.

Although I doubt taking algorithms to the Supreme Court in Parliament Square is high on the UK Digital Taskforce’s agenda, when considering policy for algorithms, it might be worth investigating the age-old legal and communications trick of the deodand.

William Holmes is a penultimate year student at the University of Bristol studying French, Spanish and Italian. He has a training contract offer with a magic circle law firm.

The post Why the law should treat algorithms like murderous Greek statues appeared first on Legal Cheek.
