Vanshika Arora, Rimpal

From Code to Collusion: Challenging the Status Quo in Regulatory Framework

[Vanshika and Rimpal are students at Army Institute of Law, Mohali.]


The proliferation of algorithms in digital markets has ignited debate concerning their potential effects on competition. There is growing concern that these advanced tools, increasingly utilized by businesses, might impede competitive dynamics by influencing decisions on pricing.


One side of this multifaceted debate contends that the use of self-learning pricing algorithms represents a substantial menace to competition, necessitating immediate action. Conversely, others dismiss this assertion as an ‘exaggerated hypothesis’ lacking empirical evidence or real-world instances. EU Competition Commissioner Vestager, for one, has likened the notion of machines colluding without human interaction to "science fiction".


However, it has been argued that even a basic algorithm variant can autonomously learn collusive outcomes without explicit communication, aligning with concerns voiced by scholars about algorithmic collusion. Nor can the possibility of self-learning algorithms colluding in the future be dismissed: yesterday’s science fiction is often today’s reality.


But can these algorithms raise prices? Indeed, they can. It all started with flies. The Making of a Fly, a textbook for fly researchers, was at one point priced at almost USD 4 million on Amazon, up from USD 35, after two sellers’ pricing algorithms repeatedly reacted to each other’s prices. The episode sparked significant concern about potential violations of antitrust law, particularly price-fixing agreements. A study suggests that an attacking algorithm progressively learns the pricing strategy of its competitors over time and then uses this knowledge to maximize its own profits by anticipating their reactions. The black-box nature of algorithms thus leaves little doubt that they can enable pricing strategies that hinder fair competition.


This article starts by analyzing the prospect of collusion by self-learning algorithms. Part I explores the varied ways in which algorithms collude. Part II identifies the gaps in regulations and determines whether the law, as it stands, is adequate to regulate algorithmic collusion. Part III attempts to bridge these gaps by proposing suggestions to revamp existing regulations.


Part I: How do Algorithms Collude?


Collusion refers to concerted action between parties who act secretly or illegally with the intention of deceiving or cheating another. Traditional collusion entails explicit communication and coordination among companies to manipulate prices or stifle competition. Algorithmic collusion, by contrast, refers to the use of self-learning pricing algorithms to achieve collusive objectives. It can manifest in various forms, contingent upon the particular context in which algorithms are employed. Ariel Ezrachi and Maurice E. Stucke identify four methods through which algorithms can facilitate collusion:


With conclusive evidence of agreement


Here, parties establish either horizontal or vertical agreements to foster collusion, employing algorithms as instruments to execute the predetermined scheme.


Messenger


In the messenger category, there is undeniable proof that humans, the “masters who map out the cartel”, harbored the intention to collude. The computer algorithms serve merely as messengers: cartel members program them to help effectuate the cartel and to monitor and punish any deviation from the cartel agreement.
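
To make the category concrete, the following is a minimal, hypothetical sketch (in Python, with invented prices and thresholds) of how such a ‘messenger’ might be coded: humans fix the cartel price, and the algorithm merely enforces it through a trigger strategy that monitors rivals and punishes deviations.

```python
# Hypothetical "messenger" sketch: humans agree on the cartel price; the code
# merely enforces it by monitoring rivals and punishing deviations with a
# temporary price war (a classic trigger strategy). All figures are invented.

CARTEL_PRICE = 100.0      # the price fixed by the human cartel members
COMPETITIVE_PRICE = 60.0  # "punishment" price charged during a price war
PUNISHMENT_PERIODS = 10   # how long a deviation is punished


def next_price(rival_prices: list[float], punishment_left: int) -> tuple[float, int]:
    """Return (my_price, remaining_punishment_periods) for the next period."""
    if punishment_left > 0:
        # Still punishing an earlier deviation: keep prices low.
        return COMPETITIVE_PRICE, punishment_left - 1
    if any(p < CARTEL_PRICE for p in rival_prices):
        # A rival has deviated from the agreement: trigger a price war.
        return COMPETITIVE_PRICE, PUNISHMENT_PERIODS
    # Everyone is complying: charge the agreed cartel price.
    return CARTEL_PRICE, 0
```

The algorithm adds nothing to the substance of the arrangement; the smoking gun remains the human decision embodied in the hard-coded cartel price.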


Hub and spoke 


The hub and spoke category focuses on competitors’ use of a single platform to determine market prices or react to market changes. This form of collusion involves an arrangement among vertical or horizontal participants (the spokes) through a common platform (the hub), resembling an implicit agreement. Here, the common algorithm that traders use as a vertical input leads to horizontal alignment.
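
A hypothetical sketch of this dynamic (names and figures invented): each spoke delegates pricing to the same hub function, and horizontal alignment follows without any spoke-to-spoke contact.

```python
# Hypothetical hub-and-spoke sketch: competing sellers (spokes) each delegate
# pricing to the same platform algorithm (hub). Because every spoke receives
# its price from one function, their prices align horizontally even though
# the spokes never communicate with each other.

def hub_price(cost: float, market_demand_index: float) -> float:
    """The platform's common pricing rule, applied to every seller."""
    return cost * (1.0 + 0.5 * market_demand_index)  # one markup rule for all


sellers = {"seller_a": 40.0, "seller_b": 40.0, "seller_c": 40.0}  # unit costs
demand_index = 0.8  # a shared market signal observed by the platform

# Each spoke independently asks the hub for a price; alignment follows.
prices = {name: hub_price(cost, demand_index) for name, cost in sellers.items()}
print(prices)  # identical prices, with no seller-to-seller contact
```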


Without conclusive evidence of agreement


Ezrachi and Stucke present two alternative scenarios in which a collusive agreement is not apparent.


Predictable agent


In the predictable agent category, companies unilaterally design algorithms to yield predictable outcomes, with programmers acknowledging that competitors are likely adopting similar algorithms; yet no agreement is reached and no explicit communication occurs between them during the coding process.
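
A hypothetical sketch of a predictable agent (figures invented): each firm unilaterally adopts a transparent price-matching rule, foreseeing that rivals will likely do likewise, and prices align without any agreement.

```python
# Hypothetical "predictable agent" sketch: each firm unilaterally adopts a
# simple, transparent rule, foreseeing that rivals will likely use similar
# rules. No firm communicates with any other, yet undercutting loses its
# appeal because every price cut is matched in the next period.

def predictable_price(my_cost: float, rival_prices: list[float]) -> float:
    """Match the lowest observed rival price, but never sell below cost."""
    return max(my_cost, min(rival_prices))


# Two firms running this rule settle at the lower of their prices and stay
# there: neither gains by cutting further, since the cut is matched at once.
print(predictable_price(my_cost=50.0, rival_prices=[80.0, 75.0]))  # -> 75.0
```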


Digital eye


Companies unilaterally design “black box” algorithms that, through machine learning or artificial intelligence, independently evolve to determine the best way to maximize profit. As with predictable agents, there is no agreement, explicit or implicit, among competitors to collude.
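
A minimal sketch of the digital eye scenario, loosely inspired by the Q-learning pricing experiments reported in the economics literature (all parameters are illustrative): two agents learn prices by trial and error, are never instructed to collude, and yet may settle above the competitive level.

```python
# Hypothetical "digital eye" sketch: two independent Q-learning sellers set
# prices by trial and error in a toy market. Neither sees the other's code
# or Q-table, and nothing instructs them to collude.

import random

PRICES = [1, 2, 3, 4, 5]   # discrete price grid
COST = 1                   # unit cost for both sellers


def profit(p_own: int, p_rival: int) -> float:
    """Toy demand: the cheaper seller wins the whole market; ties split it."""
    if p_own < p_rival:
        return float(p_own - COST)
    if p_own == p_rival:
        return (p_own - COST) * 0.5
    return 0.0


def train(episodes: int = 50_000, alpha: float = 0.1,
          gamma: float = 0.95, eps: float = 0.1):
    """Two independent Q-learners; state = rival's last price."""
    q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]
    last = [random.choice(PRICES), random.choice(PRICES)]
    for _ in range(episodes):
        acts = []
        for i in range(2):
            state = last[1 - i]
            if random.random() < eps:                  # explore
                acts.append(random.choice(PRICES))
            else:                                      # exploit learned values
                acts.append(max(q[i][state], key=q[i][state].get))
        for i in range(2):
            state, nxt = last[1 - i], acts[1 - i]
            reward = profit(acts[i], acts[1 - i])
            best_next = max(q[i][nxt].values())
            q[i][state][acts[i]] += alpha * (
                reward + gamma * best_next - q[i][state][acts[i]])
        last = acts
    return last


print(train())  # may end above the one-shot competitive price of 2
```

If agents of this kind end up charging supracompetitive prices, no line of the code above evidences an agreement, which is precisely the enforcement problem this category poses.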

 

Part II: Navigating the Regulatory Void


Algorithmic collusion, whether explicit or tacit, challenges the manner in which traditional anti-cartel enforcement functions. This is primarily because the scheme of prohibitions under Section 3 of the Competition Act 2002 falls short of capturing collusion by self-learning algorithms.

Several pertinent questions remain unanswered. While the formation of a conscious, deliberate cartel involves a ‘meeting of minds’, does the absence of such a smoking gun, as in cases of unplanned anti-competitive conduct by algorithms, amount to collusion? Can the regulator deploy the per se rule without establishing intent through evidence? What constitutes evidence in cases of conscious parallelism? Can the focus shift from the existence of an illicit agreement or concerted action, viewed through a ‘human’ prism, to a concurrence of wills between colluding parties that have benefitted from the price differential propelled by collusion? The question of liability is pivotal to this analysis: do autonomous algorithms fall under the category of ‘individuals’, or can they be treated as an extension of the agents that employ them, who are implicitly (even remotely) responsible for the instructions and specifications given to such algorithms? We explore these questions and break down enforcement techniques for each of the above categories of algorithmic collusion.


Section 3(3) of the Competition Act 2002 outlines three elements of collusion, namely (i) existence of an “agreement entered into” or “practice carried on” or “decision taken by”; (ii) “persons or association of persons or enterprises or association of enterprises”; (iii) which “directly or indirectly determines purchase or sale prices”. On fulfillment of these three criteria, an appreciable adverse effect on competition is presumed.

 

Therefore, Section 3(3) prohibits not only explicitly collusive ‘agreements’, but also ‘practices’ and ‘decisions’ taken by persons, association of persons or enterprises.

 

The Competition Commission of India (CCI) has had several (missed) opportunities to evaluate algorithmic collusion, since in all such cases the regulator has taken a restrictive approach. The commission’s decision in Samir Aggarwal established the ‘direct evidence test’, requiring cogent, direct, and explicit evidence of a ‘meeting of minds’. A similar approach was taken in the Re: Domestic Airlines case, wherein the commission did not accept the findings of the Director General for lack of material evidence. Notably, the CCI has insisted on establishing strong ‘intent’ in the above-referred cases, further laying down a ‘two-step test’ in a separate case: the first step requires establishing the existence of collusion through ‘strong human intervention’; the second examines the role of algorithms.

 

The authors submit that there are grave issues with standardizing this two-step test for algorithmic collusion. First, the test creates the impression that computerized, machine-driven collusion is not condemnable. Second, it reflects a lack of appreciation of increasingly sophisticated machine-learning algorithms and their capability to collude. Third, the commission misjudges the subjectivity of ‘intent’ in digital markets, especially where prices are determined by autonomous algorithms.

 

Part III: Bridging Regulatory Gaps: Proposals and Suggestions


In this section, the authors propose a methodical and systematic enforcement strategy for the foregoing types of collusion.

 

One blunt solution to the problem of algorithmic collusion would be to prohibit the use of self-learning, price-setting algorithms entirely. This approach would, however, be unacceptable, since an absolute restriction would forgo numerous benefits and stunt innovation. A second option would be restricting the class of permissible algorithms or prohibiting algorithms with specific collusive features. This, however, would require regulatory agencies to deploy considerable manpower and cost to regularly monitor algorithms and classify them as permissible or impermissible. A third option would be incorporating revised legal provisions into competition policy. For the previously mentioned kinds of collusive practices, the following regulatory techniques are proposed.

 

Messenger


Since algorithms would serve only as ‘messengers’ of price fluctuations, strong explicit evidence of collusion would exist, and the per se rule of illegality can be applied. The use of computers can be perceived as an ‘extension’ of the agreement to collude, having the ‘object’ of restricting competition. Liability can be clearly and easily assigned to cartel members and the persons and enterprises involved.

 

Hub and spoke


Through the lens of enforcement, this category is difficult to regulate. If it can be established, through technical scrutiny of the code, that the algorithm has indeed been engineered to facilitate collusion, the test of a ‘hub and spoke cartel’ under the second proviso to Section 3(3) of the Competition Act 2002 can be applied. Alternatively, the commission may apply the rule of reason standard in cases where indirect information exchange regarding future pricing can be established; here, the likelihood of establishing collusion through the use of algorithms is slim but not absent.

 

Predictable agent


In such a case, the regulator may want to evaluate an ‘unfair practice’ rather than an agreement. The regulation of unfair trade practices falls under the Consumer Protection Act 1986, not the Competition Act 2002, so algorithmic collusion of this kind can be regulated by consumer courts. Challenges exist, however: consumer courts lack investigative powers comparable to the CCI’s and lack the authority to impose significant deterrent fines on enterprises. The authors submit that a major revamp of the regulatory framework would be required to address this grey area. An independent and specialized consumer protection agency could be instituted; alternatively, ‘unfair trade practices’ could be brought within the CCI’s regulatory ambit, or the Consumer Protection Act 1986 could be amended to provide investigatory and injunctive powers, akin to the Swedish Marketing Practices Act 2008.

 

Digital eye


In this context, coordination achieved through tacit means is not the result of deliberate human planning but emerges from the evolution, self-learning, and autonomous execution of machines. Since conscious parallelism is not illegal, self-learning AI may escape legal scrutiny for lack of intent.

 

Conclusion


The issue of algorithmic collusion is a pressing concern supported by empirical evidence and a growing body of legal and economic analysis. Preventive action therefore warrants urgent consideration: the conventional approach courts use to prosecute collusion among human decision-makers typically hinges on intent and communication, aspects not easily applicable to algorithms. It is crucial to ensure that algorithms are developed and supervised so as to prevent collusion and to generate fair and ethical outcomes, necessitating a reassessment of current laws and regulatory policies.


