Law student Tanzeel ur Rehman explains how AI is being used to revolutionise competition law
Insidiously intercepting a trader en route to market, buying up his goods and then inflating the price was once considered one of the most heinous crimes known to law. “Forestalling”, as it was called, also marked the beginnings of antitrust, or competition law. As a breach of the ‘King’s peace’, the offence was punishable with a heavy fine, forfeiture and some humiliating time in the pillory.
A millennium later, anticompetitive practices are no longer as simple as those the Anglo-Saxons perpetrated. This is the age of Big Data and AI, paving the way for subtler and more evasive forms of monopolistic conduct. The complexity of the digital marketplace has given rise to technologically facilitated anticompetitive techniques, and antitrust enforcement now faces the challenge of playing catch-up with rapidly shifting business behaviour.
Dynamic pricing through “algorithmic collusion” is one such example. In 2011, the algorithmic pricing used by two booksellers on Amazon comically drove the price of a used book to nearly $24 million. In 2015, Uber was accused of monopolistic conduct over its surge-pricing algorithm. Although exonerated for lack of evidence of human collaboration, the District Judge highlighted that “the advancement of technological means for the orchestration of large-scale price-fixing conspiracies need not leave antitrust law behind”. In the same year, an art dealer pleaded guilty to colluding with other dealers to fix the prices of artworks sold on Amazon with the help of dynamic-pricing algorithms. Perfect price discrimination, once considered an impossibility, is now a reality. It is evident that the laws and enforcement techniques created to control the monopolies of the industrial age are struggling to keep pace with the information age.
The question begging to be asked, then, is whether computational law is the future of antitrust. Let’s analyse. Computational methods are already being used to detect, analyse and remedy the increasingly dynamic and complex nature of modern antitrust practices. In 2017, the European Commission employed mechanised legal analysis to study 1.7 billion search queries in the Google Shopping case. In 2018, the Commission used similar tools to examine 2.7 million documents in the Bayer-Monsanto merger. This narrows the well-documented historical mismatch between “law time” and a market’s “real time”, and presents a strong case that antitrust regulators can better analyse prevailing market practices by employing computational tools. For example, the American Federal Trade Commission (FTC) uses a software platform called ‘Relativity’ to analyse companies’ internal communications and identify monopolistic conduct (a toy version of this kind of screening is sketched below). Going a step further, application programming interfaces (APIs) could play a significant role in creating channels for the transfer of data between companies and regulators.
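To make the idea concrete, here is a minimal, hypothetical sketch of such communications screening. It is not Relativity’s actual workflow; the phrases and messages are invented, and a real system would use far richer models, but it shows the basic pattern of machine-assisted triage for human investigators:

```python
import re

# Phrases that commonly signal collusion in internal messages --
# both the patterns and the messages below are purely hypothetical.
RED_FLAGS = [
    r"match (their|the) price",
    r"stay out of .* market",
    r"not (to )?undercut",
    r"divide (the )?territor(y|ies)",
]

messages = [
    "Let's agree not to undercut each other next quarter.",
    "Quarterly sales figures attached.",
    "We should divide the territories by region.",
]

# Flag any message matching a red-flag pattern for human review
flagged = [m for m in messages
           if any(re.search(p, m, re.IGNORECASE) for p in RED_FLAGS)]
print(flagged)  # the first and third messages are flagged
```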
Merger control is another dimension of antitrust regulation where computational tools have useful applications. The analysis of vast datasets is the backbone of merger review, and a persistent problem is that companies control the data sent to regulators. In both the DuPont and WhatsApp cases, the European Commission found that the parties had withheld or provided misleading data during the investigations. APIs could address this by establishing systemised, real-time communication links between companies and regulators, while ML (machine learning) and AI can audit millions of documents, enabling regulators to “find the needles in these haystacks”. In addition, blockchain-style techniques could be used to create tamper-evident databases, ensuring the integrity of what is submitted, as the short sketch below illustrates.
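A minimal sketch of the integrity idea, assuming a hypothetical filing format (no real regulator API works this way): each filing submitted over an API is hashed together with the previous one, so any later alteration of an earlier filing breaks the chain and is detectable.

```python
import hashlib
import json

def chain_filing(prev_hash: str, filing: dict) -> str:
    """Hash a filing together with the previous hash, so that any
    later alteration of an earlier filing breaks the whole chain."""
    payload = json.dumps(filing, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

# Hypothetical filings submitted to a regulator over an API
filings = [
    {"doc": "merger_notification", "pages": 120},
    {"doc": "internal_emails", "count": 2_700_000},
]

h = "genesis"  # starting value for the chain
for filing in filings:
    h = chain_filing(h, filing)

print(h)  # the regulator stores this; recomputing it exposes tampering
```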
Judges and regulators are often faced with doctrinal questions pertinent to anticompetitive behaviour. Computational legal analysis (CLA) can be used to unravel patterns in judicial decisions, contracts, constitutions and existing legislation. A good example is the use of aggregated modelling to analyse linguistic patterns in Harvard’s Caselaw Access Project (CAP). Such modelling can help create topic clusters that revolve around specific antitrust doctrines, for example predatory pricing or shifting market power, which is helpful to both courts and regulators handling unique antitrust cases or investigations. By surfacing context, trends and connections between doctrines, such modelling has practical utility in tackling complex or novel scenarios.
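For a flavour of what such clustering looks like in practice, here is a minimal sketch using latent Dirichlet allocation (LDA), one common topic-modelling technique. The three toy “opinions” are invented stand-ins, not CAP data, and a real analysis would run over thousands of full judgments:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented stand-ins for full judgment texts (not real CAP data)
opinions = [
    "defendant priced below cost to drive rivals from the market",
    "the merger would substantially lessen competition in the region",
    "distributors agreed to fix resale prices across the state",
]

# Bag-of-words counts, then LDA groups the opinions into latent topics
counts = CountVectorizer(stop_words="english").fit_transform(opinions)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

doc_topics = lda.transform(counts)  # per-opinion topic weights
print(doc_topics.round(2))
```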
Computational tools also have interesting implications for the current framework of merger reviews. More specifically, they could be of great use in predicting “killer acquisitions”, deals in which an incumbent buys a nascent rival chiefly to shut down its competing innovation. The current law requires the adjudicator to use a combination of precedent and guesswork when forecasting such an acquisition; modern ML and AI tools can assist in reaching a more accurate prediction. Autoencoders, which learn compressed representations of data, are well suited to assessing dynamic market environments. Stacking multiple autoencoders (for example, for embedding, translation and detection) can help identify fact patterns, and iterative training can converge towards an optimal prediction of whether to intervene. A simplified version of the idea is sketched below.
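This is a minimal sketch of the underlying mechanism, assuming a hypothetical dataset of normalised deal features (every variable here is invented): an autoencoder is trained to reconstruct “ordinary” past acquisitions, and deals it reconstructs poorly depart from the learned pattern and can be queued for closer human review.

```python
import torch
import torch.nn as nn

# Stand-in for a real dataset: 500 past deals, 6 normalised features
# (e.g. acquirer share, target growth, pipeline overlap -- all invented)
X = torch.rand(500, 6)

model = nn.Sequential(
    nn.Linear(6, 3), nn.ReLU(),  # encoder: compress to 3 latent factors
    nn.Linear(3, 6),             # decoder: reconstruct the 6 features
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):  # train the model to reconstruct typical deals
    opt.zero_grad()
    loss = loss_fn(model(X), X)
    loss.backward()
    opt.step()

# Deals with high reconstruction error depart from the learned pattern
# and could be flagged for human scrutiny as potential killer acquisitions
with torch.no_grad():
    errors = ((model(X) - X) ** 2).mean(dim=1)
flagged = (errors > errors.mean() + 2 * errors.std()).nonzero().squeeze(1)
print(flagged)  # indices of anomalous deals
```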
Forestalling became an obsolete offence more than a century ago, but the monopolistic behaviour it embodied has since evolved into more creative and sophisticated forms. It would not be incorrect to say that the antitrust policies of today and the ‘King’s peace’ of bygone days share a common concern for consumer welfare. In this digital age, to be forewarned is to be forearmed. So, unless you are willing to pay millions of dollars on Amazon for a used book, or twice the fare for an Uber ride, it’s about time you put your trust in computational antitrust.
Tanzeel ur Rehman is a second year law student at the University of Sindh, Pakistan.