Is India Equipped to Defend Privacy in the Era of Artificial Intelligence?

Posted on July 28, 2020

Authored by Aarya Pachisia

Image Source: REUTERS/Damir Sagolj – RC1838EC3EA0/

Introduction: Need for Protection of Privacy

Privacy violations have become inevitable in the age of Artificial Intelligence (“AI”). Cambridge Analytica is a classic example of how the non-personal data of citizens can be used to predict sensitive personal information about them with a single algorithm[1]. AI software today is sophisticated enough to compute and predict the behaviour of an individual and chart possible patterns from the data already provided to it, thereby blurring the lines between personal and non-personal data[2]. Such a grave invasion of privacy calls for a robust legal framework to provide a check on the rapidly advancing sphere of AI.

On 19 February 2020, the European Commission (“Commission”) issued a White Paper on AI with proposals for regulating it, and subsequently invited comments from stakeholders. The Commission recognised the complexity of AI and the inability of the present framework of the European Union’s General Data Protection Regulation (“GDPR”), the legislation that protects the privacy of its citizens, to cope with the constantly changing dynamics of AI. It therefore becomes imperative for lawmakers to design a legislative framework which regulates AI and ensures its compliance with the European standard of privacy protection.

In India, the right to privacy recently received due recognition in the landmark judgment of the Hon’ble Supreme Court in K.S. Puttaswamy v Union of India[3], where a nine-judge bench unanimously held it to be a fundamental right under Article 21 of the Constitution of India. The rights and obligations with respect to data privacy are codified in the Personal Data Protection Bill, 2019 (“Bill”), which has been referred to a Joint Parliamentary Committee, headed by Ms. Meenakshi Lekhi, for further suggestions. The Bill finds its roots in the GDPR: it aims to establish mechanisms for the protection of personal data and proposes the setting up of a Data Protection Authority of India for that purpose.

The aim of this article is to assess whether the Bill is equipped to deal with privacy violations in the face of lightning advancements in the field of AI. Although various questions arise at this crossroads, this article analyses specific parts of the Bill, focusing on the following key questions:

  1. Whether processing of data with respect to AI is envisaged within the scope of the Bill?
  2. What are the principles governing the processing of data with respect to AI in the Bill?

Data Processing under the Bill

Section 2 of the Bill addresses the applicability of the Bill to certain categories of data processing. Data is divided into two categories: (a) data that can be traced back to an individual, i.e., personal data; and (b) anonymized data. Anonymization is the process of removing personal identifiers from data, thereby preventing the information from being traced back to a particular individual. The Bill defines anonymization, “in relation to personal data”, as “such irreversible process of transforming or converting personal data to a form in which a data principal cannot be identified”[4]. Personal data is defined under the Bill as “data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, whether online or offline, or any combination of such features with any other information, and shall include any inference drawn from such data for the purpose of profiling”[5]. Thus, personal data must be identifiable to a natural person.
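To make the distinction concrete, the sketch below shows the naive form of anonymization the Bill’s definition contemplates: stripping direct identifiers from a record. The field names and data are entirely hypothetical, not drawn from the Bill; the point is that removing direct identifiers alone is rarely “irreversible” in practice, because the remaining attributes can still act as quasi-identifiers.

```python
# Illustrative sketch only; field names are hypothetical, not from the Bill.

def anonymize(record, identifiers=("name", "phone", "email")):
    """Return a copy of the record with direct identifiers removed.

    Note: this is NOT irreversible anonymization in the Bill's sense --
    the remaining attributes (pincode, purchase history) can still act
    as quasi-identifiers and enable re-identification.
    """
    return {k: v for k, v in record.items() if k not in identifiers}

person = {"name": "A. Kumar", "phone": "98XXXXXX01",
          "pincode": "110001", "purchases": ["book", "phone case"]}

print(anonymize(person))
# The name and phone are gone, but pincode and purchases survive --
# which is why "anonymized" data can often be traced back to a principal.
```

This gap between dropping identifiers and genuinely irreversible transformation is exactly what the deanonymization research discussed below exploits.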

AI algorithms typically process data in two stages: (i) the algorithmic (training) stage, and (ii) the actual use of data to predict behaviour. In the algorithmic stage, the user trains the algorithm by providing the necessary data. In the following stage, the software predicts behaviour from the data on which it was trained in the earlier stage. Two glaring problems present themselves regarding the scope of the definitions provided under Section 3 of the Bill.
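Before turning to those problems, the two stages described above can be sketched in miniature. Everything here is invented for illustration: a toy “model” is learned from labelled examples in stage (i), then used in stage (ii) to predict behaviour for a new individual.

```python
# Hypothetical two-stage sketch: (i) training, (ii) prediction.

def train(examples):
    # Stage (i), the algorithmic stage: learn a decision threshold from
    # labelled (hours_online, clicked_ad) pairs -- invented data.
    positives = [h for h, clicked in examples if clicked]
    negatives = [h for h, clicked in examples if not clicked]
    return (sum(positives) / len(positives) +
            sum(negatives) / len(negatives)) / 2

def predict(threshold, hours_online):
    # Stage (ii): use the trained model to predict an individual's behaviour.
    return hours_online > threshold

model = train([(1, False), (2, False), (6, True), (8, True)])
print(predict(model, 7))   # True -- predicted likely to click
```

The privacy question raised below is whether the data fed in at stage (i), not just the predictions made at stage (ii), falls within the Bill’s scope.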

First, the provision does not expressly include the processing of data during the algorithmic stage; the EU bodies had to issue an express clarification on this very point. Data provided to such algorithms can be discriminatory in nature, and it therefore becomes imperative to include processing of data at this stage within the scope of applicability of the Bill, irrespective of the anonymity of such data. No express clarification has yet been provided by the Indian authorities on this subject. Since the legislation is not yet notified, there stands a good chance of either the government clarifying the position or the courts reading the algorithmic stage into the scope of Section 2 of the Bill.

Second, Section 2(B) of the Bill includes anonymized data within its scope only if it is processed by the government after being taken from data fiduciaries. This is problematic at two levels. First, recent research[6] has shown that anonymized data can easily be deanonymized, enabling such data to be traced back to its principal. Second, the provision envisages processing only by the government and not by private parties, creating a grey area in which corporations can commit privacy breaches without any consequences. AI giants in the private sector cannot be held liable for processing anonymized data, whether to train algorithms or to deanonymize it, because processing of anonymized data by anyone other than the government does not fall within the scope of the Bill.
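A common deanonymization technique, sketched below with entirely invented data, is a linkage attack: joining an “anonymized” dataset with a public one on shared quasi-identifiers such as pincode and birth year. This is an illustration of the general idea, not of any specific study cited above.

```python
# Hypothetical linkage attack: all records below are invented.

anonymized_health = [
    {"pincode": "110001", "birth_year": 1990, "diagnosis": "diabetes"},
    {"pincode": "400001", "birth_year": 1985, "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "R. Sharma", "pincode": "110001", "birth_year": 1990},
    {"name": "S. Patel", "pincode": "400001", "birth_year": 1985},
]

def link(anon, public):
    # Join the two datasets on quasi-identifiers, re-attaching names
    # to supposedly anonymous records.
    matches = []
    for a in anon:
        for p in public:
            if (a["pincode"] == p["pincode"]
                    and a["birth_year"] == p["birth_year"]):
                matches.append({"name": p["name"],
                                "diagnosis": a["diagnosis"]})
    return matches

print(link(anonymized_health, public_voter_roll))
# Both "anonymous" health records are re-identified by name.
```

Because no identifier was ever present in the health dataset, such processing by a private party would sit entirely outside the Bill as drafted.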

Moreover, if private entities feed AI algorithms anonymised data, such processing falls outside the scope of the Bill’s applicability. It therefore becomes crucial to include the processing of data by private entities irrespective of the nature of the data provided to the AI algorithm. When Parliament decides to enact legislation dealing specifically with AI, the same provisions will have to be incorporated in that legislation as well.

For the following arguments, I shall assume that the above-mentioned recommendations have been added to the Bill, and then analyse whether any further lacuna needs to be filled to protect privacy in the era of AI.

Provisions Governing Data Processing by AI

In this article, I shall specifically analyse the provisions dealing with the following principles: purpose limitation; storage limitation; the accuracy principle; and the right to be forgotten. The Bill envisages these principles within its scope, but it also states that the central government can exempt any government agency from the Bill, including from the right to be forgotten. No such provision existed in the 2018 draft Bill.

Purpose Limitation

Purpose limitation refers to processing data only for the purposes that have been consented to by the data principal. The Bill ensures this under Sections 5, 6 and 7, which make consent and free will essential elements for the processing of data by the data processor or the fiduciary. The data fiduciary is under an obligation to provide a notice specifying the purpose, the grievance redressal mechanism, the nature and category of the data, and every other relevant piece of information connected to the processing of personal data. Thus, the principal shall be informed about who has access to their data and for what purpose it will be used.

Another important aspect of consent is that it should be as easily withdrawn as it was given. The Bill dilutes the consent framework by attaching a pre-condition to such withdrawal, thereby depriving the principal of free will. It states that the principal must have a ‘valid’ reason to withdraw consent, but does not specify who determines the validity of that reason. If consent is withdrawn without a valid reason, all legal consequences of such withdrawal shall be borne by the data principal, thereby diluting the right to privacy. This problem is not specific to AI but is an overall critique of the consent framework provided in the Bill.

Storage Limitation

The storage limitation principle provides that once data has served its purpose, it should be deleted. Under Section 9 of the Bill, the data fiduciary is obligated to do so. Jurisdictions globally have adopted this principle, with very few exceptions, in order to secure the privacy of their citizens. In Canada, for instance, data that has served its purpose ought to be destroyed, anonymised or erased. Similar provisions can be identified in South Africa, Australia and the United Kingdom.
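In engineering terms, storage limitation is usually implemented as a retention policy: records past their retention window are purged. The sketch below is a minimal, hypothetical version of such a check; the 90-day window and the record layout are invented, not taken from the Bill or any of the jurisdictions named above.

```python
# Hypothetical retention-policy sketch; the window and fields are invented.
import datetime

RETENTION = datetime.timedelta(days=90)

def purge_expired(store, now):
    # Keep only records still inside the retention window; everything
    # older has "served its purpose" and is dropped (i.e., erased).
    return [r for r in store if now - r["collected_on"] < RETENTION]

now = datetime.datetime(2020, 7, 28)
store = [
    {"id": 1, "collected_on": datetime.datetime(2020, 7, 1)},   # 27 days old
    {"id": 2, "collected_on": datetime.datetime(2020, 1, 1)},   # ~209 days old
]
print([r["id"] for r in purge_expired(store, now)])   # [1]
```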

Accuracy Principle

The accuracy principle is essential to AI. The necessity of accurate data is envisaged under Section 8. The data used to train an algorithm needs to be accurate for the machine to produce appropriate results; if the data is inaccurate, the result will be incorrect, harming the efficiency of the AI software. For instance, if the process of granting loans has been automated and the data used to train the software is inaccurate, a person who is ineligible for a loan may be given one, underscoring the importance of accurate data.
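The loan example above can be sketched with entirely made-up numbers: a toy eligibility rule learned from training data approves an ineligible applicant once a single mislabelled record slips into the training set.

```python
# Hypothetical illustration of the accuracy principle; all figures invented.

def learn_income_cutoff(records):
    # Toy "model": the lowest annual income ever labelled eligible becomes
    # the approval cutoff. records: (annual_income, was_eligible) pairs.
    return min(income for income, eligible in records if eligible)

accurate = [(200_000, False), (500_000, True), (800_000, True)]
inaccurate = [(200_000, True),   # mislabelled record in the training data
              (500_000, True), (800_000, True)]

applicant_income = 250_000
print(applicant_income >= learn_income_cutoff(accurate))    # False: rejected
print(applicant_income >= learn_income_cutoff(inaccurate))  # True: wrongly approved
```

One inaccurate training record shifts the learned cutoff, and the automated system grants a loan the accurate data would have denied.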

Right to be Forgotten

The right to be forgotten is an essential principle governing privacy in the 21st century, and it has been recognized in the GDPR as well as the Bill. The principle lays down the right to prevent the disclosure of an individual’s personal data by a fiduciary on certain grounds, for instance, that the purpose for which the data was collected has been fulfilled or that the consent for processing the data has been withdrawn.

The question that arises is: how do we teach software to forget data that has been used to train it to predict a certain pattern? It is necessary to answer this question because the upcoming generation is growing up in the era of AI. Their mistakes, movements, engagements and achievements are constantly being recorded; their lives are constantly being processed. COVID-19 has forced us to transition to a digital way of life: work as well as education are now digitised, and communications and classroom lectures are recorded. Since India still does not have legislation in place, it is unclear how this wealth of data is currently being processed and with which entities it is being shared. This poses a grave threat to the privacy of every individual: the data of millions of students, professors and employees is constantly being fed to algorithms.
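The crudest answer to that question, sketched below with invented data, is to delete the data principal’s records and retrain the model from scratch. This is prohibitively expensive for real systems, which is precisely why the machine-unlearning research mentioned below seeks cheaper alternatives; the toy “model” here is hypothetical.

```python
# Naive "forgetting" by delete-and-retrain; data and model are invented.

def train_average(records):
    # Toy model: the average spend of all users seen during training.
    return sum(spend for _, spend in records) / len(records)

data = [("user_a", 100), ("user_b", 300), ("user_c", 200)]
model = train_average(data)          # model reflects user_b's data

# user_b invokes the right to be forgotten:
data = [(uid, spend) for uid, spend in data if uid != "user_b"]
model = train_average(data)          # retrain without the erased data

print(model)   # 150.0 -- the model no longer reflects user_b's data
```

For a deep network trained over weeks, retraining on every erasure request is infeasible, so the open problem is making a trained model forget without starting over.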


We live in a time when every one of us will be heavily impacted by AI. The advancement of AI gravely threatens the privacy of the individual. From smart home appliances to the products we buy on e-commerce platforms, most online platforms are constantly predicting our behaviour and blurring the lines between personal and non-personal data. A major problem with AI is that often even the developers do not understand how a system arrives at an outcome, creating a “black box” and a great deal of mystique surrounding AI. Research on teaching AI to forget is at a preliminary stage, and it is one of the most important lines of work that will enable AI and privacy to co-exist. It is therefore vital to ensure and uphold the right to be forgotten, and necessary not only to strengthen the Bill, but also to introduce legislation or an express clarificatory directive which specifically deals with the regulation of AI.

Aarya Pachisia is a 4th year law student at Jindal Global Law School. She is extremely interested in issues surrounding data privacy and AI. She is currently interning at Indian Society for Artificial Intelligence and Law that motivated her to write this piece.

[1] Cambridge Analytica, a British consulting firm, misappropriated Facebook users’ data resulting in a massive data leak also addressed by Facebook executives. Complaints were also filed against Cambridge Analytica before the Federal Trade Commission in the United States.

[2] See Keith Collins and Gabriel J.X. Dance, ‘How Researchers Learned to Use Facebook ‘Likes’ to Sway Your Thinking’, New York Times, Mar. 20, 2018, at B5.

[3] (2017) 10 SCC 1

[4] Section 3(2), Personal Data Protection Bill 2019

[5] Section 3(28), Personal Data Protection Bill 2019
