[Bhumika is a student at the Faculty of Law, Aligarh Muslim University.]
By issuing regulations that address and clarify the grey areas, the Securities and Exchange Board of India (SEBI) has cautioned investment advisers and research analysts pitching artificial intelligence-driven investment models. The new regulations will compel service providers to adopt pragmatic approaches and implement sufficient checks and balances for the back-testing and deployment of artificial intelligence.
To ensure preparedness for any artificial intelligence or machine learning issues that may arise in the future, the market regulator has specified stringent regulations assigning responsibility to intermediaries and persons regulated by SEBI that use AI tools in conducting their business and servicing their clients. The intent is to make such users more accountable while ensuring investor protection.
This post analyzes the regulatory framework notified in response to SEBI's concerns over the integrity of client data and the transparency of processes, with particular attention to how liability is fixed on users of artificial intelligence in the investment sector.
Background
On 13 November 2024, SEBI released a consultation paper inviting public feedback on proposed amendments addressing the growing use of artificial intelligence tools in financial markets. These tools enable stakeholders to make informed decisions and may play an increasingly significant role in market analysis, stock selection, investment strategies, and portfolio construction. However, while recognizing the need for intermediaries to embrace such techniques and tools, the paper equally flagged the importance of ensuring investor protection.
To advance the objectives outlined in the consultation paper, SEBI notified the SEBI (Investment Advisers) (Second Amendment) Regulations 2024 on 17 December 2024. The amendments made investment advisers using artificial intelligence solely responsible for data security, the confidentiality and integrity of client data, the advice rendered, and legal compliance. They must also disclose the extent of artificial intelligence usage to their clients. Attaching due weightage to concerns over potential data security risks associated with artificial intelligence tools, SEBI emphasized the need for robust security measures to prevent unintended data exposure.
Following these notifications, SEBI released guidelines for investment advisers and research analysts on 8 January 2025. The guidelines impose mandatory responsibility and accountability on persons regulated by SEBI who use AI tools, whether designed in-house or procured from third-party technology service providers, irrespective of the scale and scenario in which such tools are adopted for conducting business and servicing investors.
What are SEBI’s Concerns?
The use of artificial intelligence has become inevitable in almost every aspect of business, given its pivotal role in the remarkable transformation of the country's capital markets. AI allows organizations to leverage technology irrespective of their size or expertise, greatly democratizing its deployment: an organization no longer has to rely on a small set of highly specialized, technologically trained individuals to deploy technology.
However, the rapid adoption of such technology has made the market regulator cautious for a variety of reasons, including data leakage risks, the confidentiality and integrity of client data, and the opacity of processes. AI applications generate their outputs based on two key elements: user inputs and the data they have been trained on. Even a small change in the training data can lead the AI to learn and process information differently, potentially producing unexpected or undesired results. This risk is amplified by the sheer volume of data handled by the AI systems that regulated entities use. Privacy and confidentiality risks associated with AI technology continue to evolve alongside its capabilities.
Concerns about AI's transparency, accountability, and ethical implications have been present since advanced AI technologies emerged. Complex models like deep learning algorithms often function as “black boxes”, delivering accurate predictions without clearly explaining how they arrived at their conclusions. This lack of clarity raises important questions about who is responsible for AI-driven decisions, especially in legal contexts.
Assessing the Regulatory Framework
To safeguard the monetary interests of investors, SEBI notified the following revisions to the existing framework through the SEBI (Investment Advisers) (Second Amendment) Regulations 2024:
In terms of Regulation 15(14) of the SEBI (Investment Advisers) Regulations 2013 (IA Regulations), an investment adviser who uses Artificial Intelligence tools, irrespective of the scale and scenario of adoption of such tools, for servicing its clients shall be solely responsible for the security, confidentiality, integrity of the client data, use of any other information or data to arrive at investment advice, investment advice based on output of Artificial Intelligence tools and compliance with any law for the time being in force.
Further, in terms of Regulation 18(9) of the IA Regulations, an investment adviser shall disclose to the client the extent of the use of Artificial Intelligence tools in providing investment advice. The investment adviser shall provide this disclosure at the time of entering into the agreement and make such additional disclosure whenever required.
Similar revisions were made in Regulations 24(7) and 19(vii), respectively, of the SEBI (Research Analysts) Regulations 2014.
The regulations address two crucial aspects. First, SEBI's approach towards the use of AI tools outlines the duties and obligations that accompany their deployment. Second, the regulations focus on transparency, mandating complete disclosure to clients at the time of entering into an agreement, along with ongoing disclosures whenever necessary, regarding the extent to which artificial intelligence is used in providing investment advice.
The market regulator adopted an overarching stance, covering any use of artificial intelligence tools regardless of the scale or context in which they are adopted. The regulations impose a multifold responsibility upon investment advisers and research analysts: for the privacy, security, and integrity of the investors' and shareholders' data they maintain in a fiduciary capacity throughout the process, as well as for the output arising from the tools and techniques they rely upon or deal with.
Concluding Note: Implications and Compliance
In practice, intermediaries use AI tools to disseminate investment strategies and advise on them, for risk management, data analysis, and behavioral analysis, and use generative artificial intelligence for reviewing documents. While recognizing the need for market intermediaries to embrace the latest technologies, fixing accountability on such users is expected to instil greater seriousness and ensure the protection of investors. Regulations play a crucial role in fostering trust and accountability in the use of artificial intelligence and machine learning technologies. By setting clear standards and guidelines, they ensure responsible innovation, safeguard stakeholder interests, and promote transparency, ultimately paving the way for the sustainable and ethical adoption of these transformative technologies.
As per the notification, investment advisers and research analysts must comply with the requirements in respect of existing clients by 30 April 2025. Amid rapid technological development, artificial intelligence usage is growing fast, and the entities are therefore mandated to comply with this rule to ensure scrutiny of data practices. SEBI's homogeneous regulatory approach displays a uniform attitude towards the use of artificial intelligence and machine learning technologies, prioritizing investor and stakeholder welfare and transparency in the development and use of these emerging technologies.