May I have some artificial intelligence with my human rights? About the recent European Commission’s Proposal on a Regulation for Artificial Intelligence

by Vera Lúcia Raposo*

 

On 21 April 2021, the European Commission (EC) released a proposal for a future European Union (EU) regulation on Artificial Intelligence (hereafter, the ‘Proposal’). While its structure drew inspiration from the General Data Protection Regulation (GDPR), its content was built upon other relevant EU interventions, namely the EC’s White Paper on Artificial Intelligence (AI), released in February 2020. The aim is to facilitate and develop the use of AI within the EU to create a truly Digital Single Market while still protecting fundamental rights, EU values, and ethical principles that are potentially threatened by AI features such as its opacity (some have called it a black box), its complexity, its demand for copious amounts of data (big data, with all the attached concerns), and the possibility of automation. So far, so good. We expected nothing less from the EC. However, the proposed regulation might fall short of its aim.

The definition of AI used in the Proposal is presented as being broad enough to encompass future developments of this technology. However, the fast pace of technological advancement means that it will eventually become outdated.

The Proposal is based on a risk-assessment model, under which three categories of AI emerge:

  • Unacceptable-risk AI, banned in Title II of the Proposal. This category includes AI that distorts behaviours (dark patterns), exploits the vulnerabilities of specific groups (micro-targeting), scores people based on their behaviours (social scoring), or performs real-time biometric identification for law enforcement purposes in public spaces (facial identification);
  • High-risk AI, referred to in Title III of the Proposal, which is allowed, albeit under particularly strict standards;
  • Low-risk AI, for which only transparency requirements are imposed in some cases (namely chatbots and deep fakes), as established in Title IV of the Proposal.

The identification of a serious risk is based on the potential threat to health, safety, and/or fundamental rights. Notably, the ‘high-risk’ label depends not only on the specific task performed, but also on its purpose. Moreover, the consequences of an erroneous classification of an AI system are not clearly defined: a correction mechanism should be in place to amend any wrong classification, but the Proposal remains silent on this issue.

High-risk AI systems are either products, or safety components of products, whose regime has been harmonised under EU law by the legislation listed in Annex II (medical devices, toys, elevators, and many others); or stand-alone AI systems that pose special threats to fundamental rights and that are referred to in Annex III. These lists shall be updated in accordance with the latest technological developments.

For high-risk AI, a two-step system of control and monitoring is built. First, control is performed through mandatory requirements with which AI systems must comply before being placed on the market. A conformity assessment is undertaken by a third party following harmonised standards, which may lead to the issuance of the ‘CE marking of conformity’. Second, an ex-post monitoring mechanism will be put in place by market surveillance authorities, using the powers conferred on them by Regulation (EU) 2019/1020 on market surveillance. Moreover, AI providers are also obliged to report serious incidents and malfunctions.

Low-risk AI systems almost go unnoticed in the Proposal, except for the provisions on transparency. The two-step procedure is not required in this case; however, providers can voluntarily commit to similar standards by creating their own codes of conduct (Article 69).


Such a legal framework requires the establishment of specific bodies at the national level (notifying authorities and notified bodies), as well as at the European level, particularly the European Artificial Intelligence Board. The latter is a supranational supervisory authority, similar to the European Data Protection Board (EDPB), created under the umbrella of the GDPR, and also to the European Board for Digital Services, recently proposed in the Digital Services Act.

Another innovation is the EU database, in which stand-alone high-risk AI systems must be registered before being placed on the market, to facilitate transparency and tracking. The information is supposed to be uploaded to the database by the provider of the high-risk AI system. Neutral entities, such as the notified bodies, could instead be entrusted with this task for all AI systems; such a solution would lead to better results in terms of AI safety and transparency.

Accountability is another matter of concern. Compliance requirements fall on AI providers, i.e., the ones that develop the AI and put the AI system (or its outputs) on the EU market, even if they are not established within EU territory. In addition to the above-mentioned tasks of control, monitoring, and information updating, their obligations include the implementation of risk management and quality management systems, the development of detailed technical documentation, the maintenance of automatically generated logs, and transparency obligations requiring that those interacting with AI systems be informed that they are doing so. Moreover, compliance is also expected from AI users, i.e., people or entities established in the EU using AI in a professional context, or not established in the EU but whose outputs are used in the EU, along with distributors and importers. In the event of non-compliance, the Proposal foresees administrative fines of up to €30m or 6% of total worldwide annual turnover for the most serious infringements (€20m or 4% for others), echoing the GDPR’s clauses. However, no accountability mechanisms are in place, and individuals harmed by AI systems can barely obtain redress. The 2019 report on Liability for Artificial Intelligence and other Emerging Digital Technologies and the European Parliament Resolution of 20 October 2020 on a Civil Liability Regime for Artificial Intelligence suggest that such a mechanism might be put in place, as also supported by the Coordinated Plan on Artificial Intelligence, released together with the Proposal.
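
Both the GDPR and the Proposal compute the fine ceiling as the higher of a fixed amount and a percentage of worldwide annual turnover. Here is a minimal sketch of that arithmetic in Python, using the €20m/4% tier for illustration (the function name and the example turnover are invented):

```python
def max_fine(turnover_eur: float, cap_eur: float = 20_000_000, pct: float = 0.04) -> float:
    """Fine ceiling: the fixed cap or the percentage of total worldwide
    annual turnover, whichever is higher."""
    return max(cap_eur, pct * turnover_eur)


# Illustrative only: for a company with €2bn worldwide turnover,
# 4% (€80m) exceeds the €20m cap, so the ceiling is €80m.
print(max_fine(2_000_000_000))  # 80000000.0
```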


Some highly controversial topics, such as facial recognition technologies (FRT), are expected to raise discussion amongst experts. Interestingly, the 2020 White Paper did not ban FRT, although a draft version had suggested a time-limited ban on the use of FRT in public spaces; the White Paper merely recommended appropriate precautions, without giving much detail on such recommendations. Instead, Article 5 of the Proposal clearly prohibits the use of some forms of FRT. It is expected that Article 5/1/d, in particular, will be contested by those championing this technology. The norm bans ‘real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes’, allegedly one of its more threatening uses from the perspective of fundamental human rights. Considering its potential added value for crime control, the Proposal provides for some exceptions: the targeted search for potential crime victims, including missing children; a ‘specific, substantial and imminent threat to the life or physical safety of natural persons, or of a terrorist attack’; and the detection, localisation, identification, or prosecution of a perpetrator or suspect of the crimes referred to in Article 2/2 of the Framework Decision on the European Arrest Warrant. The latter includes grievous crimes, such as terrorism, trafficking in human beings, murder, and rape, but also quite different crimes, such as corruption, fraud, and the facilitation of unauthorised entry and residence. Moreover, the ban leaves several loopholes: it does not cover the use of FRT for law enforcement purposes that does not take place in real time or is not carried out in public spaces, FRT used by other public entities not related to law enforcement, or FRT used by private individuals and companies. Critics might argue that many doors are still left open.

Overall, the Proposal is more human-rights-friendly than the White Paper, but arguably also more conservative, a potential downside for the EU’s competitive digital capacity on a global scale. At its core, the regulatory idea – the categorisation of AI systems by level of risk – is the same in the Proposal and the White Paper. However, the White Paper had a more pro-technology approach and, unlike the Proposal, did not elaborate in detail on potential human rights violations. Some have even pointed out that the White Paper was more concerned with the economic outputs of a massive investment in AI than with its consequences for human rights. Although the criticism might be excessive, the White Paper did place a stronger emphasis on technological development, as evidenced by its various sections dedicated to this aim (comparatively, it contains proportionately more discussion on development and innovation than the Proposal).

In contrast, the Proposal gives more space to the protection of fundamental rights (though many will argue not enough), as expressed in multiple binding norms imposing risk assessments and safety requirements, whose violation can lead to severe economic penalties. Assuming this Proposal indeed becomes the new AI Regulation, my guess is that European AI developers and manufacturers will be asked to comply with so many different requirements, also coming from other norms (the GDPR, the Medical Devices Regulation, the In Vitro Diagnostic Medical Devices Regulation, the Data Governance Act proposal), that AI in Europe will become a scientific topic rather than a real industry.

In sum, the Proposal essentially pivots around two core concepts. The first is compliance, based on a system of harmonised rules, monitoring, and good governance. Consequently, the second is the principle of trust (‘trust’ and ‘trustworthy’ are emphasised throughout the Proposal). On the one hand, developers of AI shall be able to carry out their businesses within the EU market by complying with a unified body of rules applicable in all Member States. On the other hand, AI users should be able to rely on the safety of those systems.

Innovation should have been the third characteristic. The impetus to bring about innovation in AI technologies is restricted to Title V which, albeit introducing interesting provisions, falls short of what would be required for a truly digital single market. The most promising initiative is the creation of regulatory sandboxes to encourage innovation, though under strict (too strict?) requirements. AI investment in the EU might be hampered by such an ‘innovation hole’, which could advantage other leading players. Given the outstanding pace of Chinese technological development, including on AI, the EU might not be able to catch up with China in the near future (or ever). Whether the Proposal strikes a fair balance between innovation and human rights, or whether it will instead lead the EU to stagnation in the domain of AI, remains to be seen.

 

*Vera Lúcia Raposo is Associate Professor at the Faculty of Law of the University of Macau, China, and Assistant Professor at the Faculty of Law of the University of Coimbra, Portugal. Her main lines of research are health law and digital law.

The UK Adequacy Decision and the Looming Possibility of a Schrems III

by Osal Stephen Kelly*

Introduction

In July 2020, the Court of Justice of the European Union (“CJEU”) delivered its judgment in the Schrems II case brought by the Austrian lawyer and activist Max Schrems, with far-reaching implications for data protection policy and practice. One question of particular urgency is what the consequences will be for the continued flow of personal data from the EU to the UK: while the EU-UK Trade and Cooperation Agreement temporarily allows these flows to continue on the same terms as between member states, this arrangement will end on 30th June 2021. The purpose of this period is to allow the EU Commission to determine whether or not to grant an “adequacy decision” confirming that the UK provides a level of protection essentially equivalent to that of member states, which would allow these important transfers to continue indefinitely. While the Commission has issued a draft adequacy decision, some of the issues identified by the European Data Protection Board (“EDPB”) in its recent opinion on the draft expose frailties in these protections that could form the basis for a legal challenge in the future. It is submitted that there are two areas of particular vulnerability that would be key in any such challenge. First, there are serious unresolved questions around the powers of UK and US authorities to access data for security purposes. Second, the UK’s emerging post-Brexit constitutional and legal framework is likely to be somewhat less advantageous to data subjects vindicating their rights than was the case when EU law had direct effect.

Schrems II

Schrems II follows an earlier case in which Mr Schrems had challenged the previous transfer framework (Schrems I). The Schrems II case arose from a complaint concerning the transfer of his data from Facebook Ireland to Facebook Inc. (based in the United States). The complaint was made to the Irish Data Protection Commissioner and resulted in the Irish High Court making a preliminary reference to the CJEU. In its submissions, Facebook sought to justify these transfers as permitted by the EU Commission’s Privacy Shield decision, which set additional safeguards for data moving from the EU to the US. However, the Court found that the Privacy Shield was invalid, as the protections offered by US law did not in fact afford the required level of protection. The Court stressed the importance of “effective and enforceable data subject rights” (para. 177 of the judgment) and found that data subjects did not enjoy such rights under the Privacy Shield. Particular emphasis was placed on the lack of limits on the power of surveillance agencies to collect data on individuals held by companies such as Facebook (para. 180). While the Court recognised that data controllers could in principle rely on standard contractual clauses approved by the Commission to allow cross-border data transfers to continue, it noted that such clauses did not necessarily protect data from unlawful access by the authorities of the receiving country (para. 141).


Schrems III?

Although the UK ceased to be subject to EU law from 31st December 2020, the GDPR has been incorporated (with amendments) into UK domestic law, in line with Section 3 of the European Union (Withdrawal) Act 2018. This amended version, referred to as the “UK GDPR”, now forms the basis of the UK’s legal framework for data protection, along with the UK’s existing Data Protection Act 2018 (draft adequacy decision, Recital 14). This is the framework examined in the Commission’s draft adequacy decision and, subsequently, in the EDPB’s opinion, released on 13th April 2021. Although important, the opinion is in itself non-binding, and the final decision on adopting the adequacy decision rests with the Commission, so the decision is likely to be approved.

The EDPB opinion, read in light of Schrems II, suggests that the UK’s intelligence operations warrant particular scrutiny as regards compliance with the (EU) GDPR. While the tone of the opinion as a whole is very measured, the EDPB nonetheless expresses “strong concerns” (para. 88 of the opinion) over the data-sharing agreement between US and UK authorities pursuant to the US CLOUD Act. The Act requires US companies to disclose information stored on overseas-based servers on foot of a valid warrant. The EDPB notes that the Commission’s draft decision refers to non-binding “explanations” that were provided to it by UK authorities (para. 88 of the opinion). Critically, however, the EDPB notes that these explanations did not seem to comprise “any concrete written assurance or commitment” on the part of the UK Government. It is difficult to see how mere explanations without substantive legal force could be relied upon by data subjects in enforcing their rights, which is concerning given that the existence of “effective and enforceable data subject rights” was deemed vitally important in Schrems II.

Moreover, para. 189 of the opinion highlights how broad the general exemption is for intelligence-related processing, stating that “national security certificate DPA/S27/Security Service provides that until 24 July 2024, personal data processed ‘for, on behalf of, at the request of or with the aid or assistance of the Security Service or’ and ‘where such processing is necessary to facilitate the proper discharge of the functions of the Security Service described in section 1 of the Security Service Act 1989’ are exempted from the corresponding provisions in UK law to Chapter V GDPR in relation to transfers of personal data to third countries or international organisations”.

This provision is as open-ended as Section 702 of the US Foreign Intelligence Surveillance Act, which was found not to afford a sufficient level of protection to data flows in Schrems II (para. 180 of the judgment). If Chapter V GDPR (and the equivalent provisions in the UK GDPR) does not apply to intelligence processing, personal data could be transferred to US authorities and thus fall within the scope of the Court’s ruling in Schrems II.

Given that the UK is no longer a member of the EU nor subject to the jurisdiction of the CJEU, issues also arise in relation to the UK’s overall legal framework (para. 54 of the opinion). The Commission has placed great emphasis on the fact that the UK will continue to be a party to the European Convention on Human Rights (“ECHR”) and thus part of the “European privacy family” (press release accompanying the adequacy decision). However, while the rights listed in the ECHR are also included in the EU’s Charter of Fundamental Rights, in Schrems II the Court noted that the ECHR is not part of the EU law acquis (paras. 98, 99 of the judgment). Furthermore, the UK Government will review the Human Rights Act 1998, which implements the ECHR in the UK; the review will consider whether courts have been “unduly drawn into matters of policy”. Given that the CJEU identified “effective and enforceable data subject rights” as key in determining whether a country provides an adequate level of protection (para. 45 of the judgment), any dilution of citizens’ ability to invoke their ECHR rights would be likely to count against the UK in the event of a legal challenge.

Conclusion

The foregoing indicates that a credible case could be brought before the Court to challenge the validity of the adequacy decision in the future. On a practical note, data controllers can at least be reassured by the CJEU’s clarification in Schrems II that an adequacy decision enjoys, in effect, a presumption of legality until it is successfully challenged (para. 156 of the judgment); accordingly, they should not incur any liability for data transfers while the adequacy decision remains in place, for however long that may be.

*Osal Kelly is a postgraduate Law student at the Law Society of Ireland in Dublin and holds an undergraduate degree in Philosophy from Trinity College Dublin. He currently works in the Irish public service. This article is written in a personal capacity.

 

EU Data Protection in Trade Agreements

 

by David Scholte*

Practical solutions to a theoretical conundrum

After the implementation of the General Data Protection Regulation (GDPR) in 2018, the European Union (EU) has been striving to uphold its high standards of protection for transfers of EU citizens’ personal data throughout the world. To secure these standards, it has two powerful tools at its disposal.

Tool number one is the ‘adequacy decision’: the EU Commission will ‘determine […] whether a country outside of the EU offers an adequate level of data protection’ (European Commission, Adequacy Decisions). ‘Adequate’ means comparable to the protection offered by the EU. If so determined, cross-border data flows between the EU and the third country can take place unimpeded and without any further safeguards. Tool number two is data protection provisions in trade agreements between the EU and third countries (see Art. 28.3(2)(ii) CETA, Art. 8.3 JEFTA, and Art. 8.62(e)(ii) EU-Singapore FTA).

The EU is a prominent advocate of liberalising (digital) trade but will always vehemently protect its data protection standards; this is made explicit in the statement that ‘the EU data protection rules cannot be subject to negotiations in a free trade agreement’. (COM(2017) 7 final)

Data protection clauses in previous trade agreements used to be sectoral provisions modelled after Article XIV of the multilateral General Agreement on Trade in Services (GATS). However, given the fast-changing nature of digital trade and the implementation of the broad-scoped GDPR, the EU’s view was that new provisions were needed:

‘These horizontal provisions rule out unjustified restrictions, such as forced data localisation requirements, whilst preserving the regulatory autonomy of the parties to protect the fundamental right to data protection’. (COM(2020) 264 final)

In 2018, the Commission published horizontal draft provisions that it intended to include in future trade agreements. It should be noted that the provisions modelled after the GATS article had always included a requirement of ‘necessity’ and stated that any measure taken with regard to the protection of personal data must not be a ‘means of arbitrary or unjustifiable discrimination [or] a disguised restriction’. The new provisions, however, would be applicable throughout the agreement and, most importantly, do away with the conditions and limitations found in the old type of provisions.

There are no longer requirements that must be fulfilled before a measure with regard to personal data can be taken. The draft provisions regarding data protection read as follows:

  1. Each party recognises that the protection of personal data and privacy is a fundamental right […]
  2. Each party may adopt and maintain the safeguards it deems appropriate to ensure the protection of personal data and privacy, including through the adoption and application of rules for the cross-border transfer of personal data. Nothing in this agreement shall affect the protection of personal data and privacy afforded by the Parties´ respective safeguards.
  3. Each party shall inform the other Party about any safeguard it adopts or maintains according to paragraph 2.
  4. For the purposes of this agreement, ´personal data´ means any information relating to an identified or identifiable natural person.
  5. For greater certainty, the Investment Court System does not apply to the provisions in Articles 1 and 2.

Although the EU had proposed this provision in trade negotiations with Australia and New Zealand, the first agreement where this new type of rules has been fully implemented is the EU-UK Trade and Cooperation Agreement (TCA), albeit in a slightly different form:

  1. Each Party recognises that individuals have a right to the protection of personal data and privacy and that high standards in this regard contribute to trust in the digital economy and to the development of trade.
  2. Nothing in this Agreement shall prevent a Party from adopting or maintaining measures on the protection of personal data and privacy, including with respect to cross-border data transfers, provided that the law of the Party provides for instruments enabling transfers under conditions of general application for the protection of the data transferred.
  3. Each Party shall inform the other Party about any measure referred to in paragraph 2 that it adopts or maintains.

The compromise between the positions of the parties reflects the difficulties of translating drafted horizontal provisions into real negotiations. What is clear is that the all-encompassing, condition-less provision that the Commission had envisioned did not come to fruition. In the first paragraph, data protection is no longer a fundamental right, something that will strike purists and that legally puts the protection of data on a lower pedestal than if it had remained a fundamental right.

Moreover, in the draft provision, paragraph two gives both parties full authority over the adoption of safeguards, with no conditions attached. In contrast, the provision adopted in the TCA is worded quite differently: ‘nothing in this agreement shall prevent […] provided that’ instead of ‘each party may adopt’. This gives the paragraph a negative wording, with, again, some conditions attached. It bears a resemblance to the GATS article, meaning that it would not be free from conflict and possible dispute. (WTO Analytical Index, GATS – Article XIV (Jurisprudence))

Because of the transition period, data flows under the agreement remain unrestricted as long as the UK continues to apply data protection rules based on EU law (EU-UK Agreement, Part Seven, Article FINPROV.10A(4)). Moreover, with an adequacy decision pending, a large divergence between UK and EU data protection is not likely to arise. Once data protection in the UK is deemed adequate, the article will become moot.

However, this quite substantial modification of the EU’s original proposal does show that the EU can be flexible on the wording of such rules. In the TCA case, the EU’s position is explained by its special and interconnected relationship with the UK, a European country and former Member State. Nonetheless, it is interesting that the EU Commission accepted provisions different from its draft, although it had firmly stated that those provisions would not be subject to negotiations.

In the future, the EU will strive to include such horizontal provisions in all its trade deals. Indeed, in the trade negotiations with Australia and New Zealand, the provisions proposed again mirror the draft provisions. With New Zealand already having received an adequacy decision from the Commission, the question remains whether a horizontal provision is a priority for both parties. Considering New Zealand’s ‘culture of compliance’ (Henning, 2020), data protection will not be a major hurdle, and one can expect the horizontal provision to be included in the upcoming EU-New Zealand trade deal without significant amendments.

For countries without this close connection to EU data rules, such as Australia, the inclusion of such a broad horizontal provision could be problematic. Third countries have the reasonable worry that such a blanket exception could be used for ‘otherwise unjustifiable IT and data localization requirements’. (Yakovleva & Irion, 2020, p. 219)

The provisions in the Australia and New Zealand deals will give a clearer idea of what these new horizontal provisions mean for EU trade negotiations and deals. It seems, however, that the Commission’s position on the matter is far more pragmatic, and more reliant on (unilateral) adequacy decisions, than it presents itself to be at first glance. The full regulatory autonomy that the EU strives for has not been achieved in the TCA and will thus most likely not be achieved in future trade deals. A missed opportunity.

 

*David Scholte is a Junior Lecturer in EU Law at Utrecht University, the Netherlands. He is also currently pursuing a Master in International Relations at Leiden University.

Can the Legitimate Interests Ground Justify Web-Scraping of Personal Data for Direct Marketing Purposes under the GDPR?

by Ali Talip Pınarbaşı, LLM

 

WHAT IS DIRECT MARKETING? HOW IS WEB-SCRAPING USED FOR DIRECT MARKETING?

 

As grabbing customers’ attention through digital advertising has become harder, reaching out to customers directly has become more vital for businesses. Examples of such direct communication include cold-calling, cold-emailing, postal mail, and point-of-sale marketing. All these methods constitute direct marketing.

The distinguishing feature of direct marketing is that the prospective customer does not initiate the communication; the first step is taken by the seller, who usually calls on the customer to take a certain action, such as subscribing to a newsletter or making a purchase.

Every direct marketing campaign, be it via email marketing or telemarketing, requires access to vast amounts of customer contact data, such as e-mail addresses and phone numbers.

However, such contact data does not magically appear in the marketers’ databases, so they need to extract it from various sources, including websites and online directories.

This is where web-scraping comes into play: web-scraping is a technique used to extract the contact details of individuals from websites and online directories. Following the extraction of these data, marketers then contact individuals to promote their products or services.

For example, an insurance company may want to advertise its new car insurance product to people who have been in car accidents before. To send e-mails or make calls to those people, the insurance company will have to collect the contact details of these individuals. This company can use web-scraping technology to collect their contact details.
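
To make the mechanics concrete, here is a minimal sketch of such a scraper in Python. It is purely illustrative: the directory URL is hypothetical, the e-mail regex is simplistic, and the widely used requests and BeautifulSoup libraries are assumed; a real scraper would be more elaborate and, as discussed below, would need a lawful basis for this processing.

```python
import re

import requests
from bs4 import BeautifulSoup

# Hypothetical directory page; the URL is illustrative only.
DIRECTORY_URL = "https://example.com/members"

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def scrape_emails(url: str) -> set[str]:
    """Fetch a page and return the e-mail addresses found in its text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    text = BeautifulSoup(response.text, "html.parser").get_text(" ")
    return set(EMAIL_RE.findall(text))


if __name__ == "__main__":
    for email in sorted(scrape_emails(DIRECTORY_URL)):
        print(email)
```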

 

LEGITIMATE INTERESTS CAN BE THE LEGAL BASIS FOR SCRAPING OF PERSONAL DATA FROM THE WEB FOR DIRECT MARKETING PURPOSES

When the data controller extracts personal data from websites or directories, it is likely that she does not have the consent of the data subjects. Therefore, data controllers must justify their scraping activity under another lawful basis for the processing of personal data, which will inevitably be the ‘legitimate interests’ basis.

However, it is quite common to come across articles on the internet positing that the GDPR completely prohibits web-scraping: unless there is consent, the processing is unlawful and will lead to hefty fines.

One recent example supporting this prevalent view is the French Data Protection Authority’s (CNIL) guidance, which rejected the possibility that legitimate interests can justify the scraping of personal data. The reasoning behind this position is that data subjects do not expect to receive direct marketing communications from a third-party data controller when they share their personal data with a data controller.

In other words, the Guidance rejects reliance on the legitimate interests ground to justify web-scraping based on one single criterion: the expectations of the data subject.

However, as will be explained below, the legitimate interests assessment cannot be reduced to a single determining criterion, because it requires taking into account all factors and circumstances.

The following reasons demonstrate why the legitimate interests ground can be used to justify web-scraping.

 

  1. Scraping of personal data from the web is a separate processing activity subject to the GDPR, distinct from the direct marketing activity itself.

 

Consider a data controller who scrapes personal data from the web and then uses this data for direct marketing purposes, such as sending cold e-mails to individuals. In this scenario, both the scraping activity and the cold e-mailing are separate processing activities subject to the GDPR, and both have the same purpose: direct marketing.

As the scraping of personal data is done for direct marketing purposes, GDPR’s rules for processing of personal data for direct marketing purposes should apply to this scraping activity.

Recital 47 of GDPR states that “[t]he processing of personal data for direct marketing purposes may be regarded as carried out for a legitimate interest.”

Considering the GDPR’s approach, rejecting reliance on the legitimate interests ground to justify web-scraping for direct marketing purposes seems a bizarre result that does not align with the wording of the GDPR.

 

  2. The data controller has the discretion to conduct a legitimate interests analysis to justify web-scraping; the GDPR does not categorically exclude web-scraping of personal data.

 

Stating that web-scraping can only be justified on the basis of consent makes web-scraping activities completely illegal under the GDPR, as consent is practically impossible to obtain in web-scraping activities. In other words, rejecting reliance on legitimate interests means prohibiting a data processing activity that the GDPR did not prohibit.

On the contrary, the GDPR explicitly states that processing of personal data for direct marketing purposes can be lawful based on legitimate interests. If the purpose of a web-scraping activity is direct marketing, then it does not make sense to say that consent is the only lawful basis that can justify the scraping activity.

Therefore, the data controller should be able to rely on the legitimate interests basis to justify its web-scraping activity.

This of course does not guarantee that the web-scraping activity will be considered lawful in every circumstance. Web-scraping activity can still be unlawful if the conditions for legitimate interests are not satisfied.

Having established that legitimate interests can justify web-scraping, let’s now look at how the test would be applied in practice.

 

APPLYING THE LEGITIMATE INTERESTS TEST TO WEB-SCRAPING FOR DIRECT MARKETING

The legitimate interests test requires a balancing exercise in which the interests of the data controller are weighed against the rights and freedoms of the data subjects. In this balancing exercise, all factors and circumstances should be taken into account.

This balancing exercise can be carried out by applying a three-step test:

  1. What are the legitimate interests of the data controller?

In such a competitive business environment, reaching out to potential customers to promote one’s products and services is vital for every business. Therefore, collecting the contact details of individuals in order to contact them for direct marketing purposes serves the commercial interests of the data controller. Two examples of these commercial interests can be given.

Firstly, web-scraping for direct marketing purposes costs far less than traditional marketing methods or running ads on digital media platforms. This is particularly true for small and medium-sized businesses, which have very limited marketing budgets and difficulties in reaching their target customers.

Secondly, web-scraping can be effective in finding a specific group of customers who might be more likely to engage with the business. For instance, web-scraping can help a business market its products or services to a particular group of people who belong to a certain age group or who live in a specific region.

  2. Is web-scraping necessary?

This step requires an investigation into whether there are less intrusive ways to achieve the marketing goal.

This will vary depending on the particular industry in which the business operates, the availability of other methods to reach customers, and the impact on the privacy of the data subjects.

For instance, if a data controller plans to promote its farming equipment to farmers by cold e-mailing or cold-calling them after scraping their contact information, this may pass the necessity test: it may be the most practicable way to reach these customers, because it is almost impossible to reach farmers through traditional media outlets or by running ads on digital platforms.

  3. Do the individual’s interests override the interests of the data controller?

This step requires a balancing exercise between the two sides. The following factors should be considered in this weighing exercise:

- the potential privacy impact of the web-scraping on the individual (a high impact may tip the balance in favour of unlawfulness);

- the sensitive character of the data;

- the reasonable expectations of the customer;

- the degree of intrusion of the processing.

Depending on the specific circumstances of the case, the result of the balancing exercise will differ.

For instance, let’s imagine two different scenarios in which personal data are scraped from the web for direct marketing purposes.

Scenario 1: Company A scrapes the e-mail addresses of thousands of high school students to promote its math course materials to them via cold e-mailing. However, it takes appropriate security measures, such as encryption and pseudonymisation, does not share this data with third parties, and does not send spammy e-mails to everyone, but only selects a small number of relevant students to promote its products to.

Scenario 2: Company B does the same scraping activity as Company A, but it does not apply the relevant security measures and shares the scraped data with third parties.

Comparing these two scenarios, it is clear that the privacy impact of Company A’s scraping activity on individuals is minimal, whereas Company B’s scraping is likely to expose the data subjects’ personal data to high risk.
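
As an illustration of the safeguards contemplated in Scenario 1, here is a minimal, hypothetical Python sketch of pseudonymisation by keyed hashing; the key variable and the sample addresses are invented, and a real deployment would also need key management and access controls:

```python
import hashlib
import hmac
import os

# Hypothetical illustration of the pseudonymisation mentioned in Scenario 1.
# Each e-mail address is replaced by a keyed hash; the secret key (and any
# token-to-address lookup table) is stored separately under access control.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "demo-key-only").encode()


def pseudonymise(email: str) -> str:
    """Return a stable token that cannot be reversed without the key."""
    digest = hmac.new(SECRET_KEY, email.strip().lower().encode(), hashlib.sha256)
    return digest.hexdigest()


scraped_emails = ["student.one@example.com", "student.two@example.com"]  # made-up data
print([pseudonymise(e) for e in scraped_emails])
```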

As can be seen, every instance of web-scraping for direct marketing purposes has different implications for individuals, and justifying it on the basis of legitimate interests requires a case-by-case analysis.

CONCLUSION

The legitimate interests ground can justify web-scraping of personal data for direct marketing purposes.

When conducting the legitimate interests analysis, all factors and circumstances should be taken into account, such as the privacy impact on the individual, the commercial interests of the data controller, and the necessity of web-scraping, instead of focusing on just one criterion, such as the expectations of individuals.

 

About the author

Ali Talip Pınarbaşı is a Legal Consultant based in Istanbul. He provides legal consultancy services on IP Law and Data Protection Law. He completed his LLM Degree at King’s College London, specializing in IP & IT Law.

Location privacy and data retention in times of pandemic and the importance of harmonisation at European level

by Patrícia Corrêa

In this time of pandemic, many countries are starting to actively monitor cellphone data to try to contain the spread of the new coronavirus. Governments are using location data to trace contacts or to monitor and enforce the quarantine of persons who have tested positive for COVID-19 or those with whom they have come into contact.

The United States Government is in discussions with the tech industry about how to use Americans’ cellphone location data to track the spread of the novel coronavirus. In Iceland, authorities have launched an app that tracks users’ movements in order to help trace coronavirus cases by collecting data about other phones in the area. In India, state authorities have launched an application to track the movement history of persons who tested positive, also providing the dates and times of the places those patients visited. In Brazil, at least one city is already using cellphone data to monitor gatherings of people and take action to disperse them, and the federal government will soon follow. There are reports of similar approaches in many other countries as well.

At European level, Internal Market Commissioner Thierry Breton has held a videoconference with CEOs of European telecommunication companies and the GSMA to discuss the sharing of anonymised metadata for modelling and predicting the propagation of the virus.

Does this approach necessarily put data privacy at risk? Is the trade-off between data privacy and public health necessary? While it is true that in exceptional circumstances fundamental rights need to be balanced against each other, data privacy should not be an insurmountable obstacle to the implementation of exceptional public health policies.

Some basics on data and metadata

Simply put, data consists of potential information that has to be processed to be useful. [1] Metadata, on the other hand, is “data about data”, comprising all the information about data at any given time, at any level of aggregation. It is structured information about an information resource of any media type or format. [2]

In order to safeguard privacy, personal data must be anonymised before processing. Anonymisation refers to the process of de-identifying sensitive data while preserving its format and type, [3] so that it cannot be tied to specific individuals. Privacy can also be assured by means of aggregation, which refers to the “process where raw data is gathered and expressed in a summary form for statistical analysis.”
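
To make the distinction concrete, here is a minimal, hypothetical Python sketch of aggregation: individual location pings are coarsened to grid cells, and only cells reaching a minimum group size are released. The coordinates and the threshold are invented for the example:

```python
from collections import Counter

# Hypothetical location pings: (user_id, latitude, longitude).
pings = [
    ("u1", 41.1496, -8.6109),
    ("u2", 41.1501, -8.6120),
    ("u3", 41.1499, -8.6115),
    ("u4", 40.6405, -8.6538),
]

K_MIN = 3  # illustrative minimum group size before a cell may be published


def aggregate(pings, precision=2, k_min=K_MIN):
    """Coarsen coordinates to a grid and keep only cells with >= k_min users."""
    cells = Counter((round(lat, precision), round(lon, precision)) for _, lat, lon in pings)
    return {cell: count for cell, count in cells.items() if count >= k_min}


print(aggregate(pings))  # the lone ping is suppressed, e.g. {(41.15, -8.61): 3}
```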

Conditions for the use of location data

While in some countries the use of information to combat the COVID-19 outbreak seems to go beyond anonymised data (tracking individual locations and contacts, for instance, requires device-level data), in Europe, so far, collaboration between telecommunication companies and governments appears to encompass only the exchange of anonymised data or data-based models. For that level of data processing, the European Data Protection Board issued an approval statement subject to certain conditions, such as the anonymity of the processed data and the applicability of administrative controls, including security, limited access, and limited retention periods.

On 8 April 2020, the European Commission issued a Recommendation on a Common Union Toolbox for the Use of Technology and Data to Combat and Exit from the COVID-19 Crisis, in particular concerning mobile applications and the use of anonymised mobility data. The Recommendation acknowledges the value of digital technologies and data in combating the COVID-19 crisis, stating, however, that fragmented and uncoordinated approaches could hamper the effectiveness of measures aimed at combating the pandemic and violate fundamental rights and freedoms. It sets up a process for developing a common approach (the Toolbox) to the use of digital means to address the crisis. The Toolbox will consist of practical measures for making effective use of technologies and data, with a focus on a pan-European approach to the use of mobile applications, coordinated at Union level, and a common scheme for using anonymised and aggregated data on the mobility of populations.

Regarding the use of mobility data, the Recommendation provides, inter alia, for safeguards to be put in place to prevent de-anonymisation and avoid the re-identification of individuals, including guarantees of adequate levels of data and IT security and an assessment of re-identification risks when correlating anonymised data with other data.

The right to location privacy

According to Article 4(1) of the GDPR, personal data comprises any information relating to an identified or identifiable natural person, including location data. Location data, as defined by the ePrivacy Directive, means any data processed in an electronic communications network or by an electronic communications service, indicating the geographic position of the terminal equipment of a user of a publicly available electronic communications service. It can be tied to a known individual (e.g. a name linked to a cell phone subscription) or to an identifier associated with a specific device (pseudonymised data). In other cases, a dataset is modified to display the location of groups of people instead of individuals (aggregated data).

Location privacy, hence, relates to the location information of an individual, in the sense of preventing others from learning one’s current or past location. [4] In other words, “This definition captures the idea that the person whose location is being measured should control who can know it.”

The right to location privacy encompasses two fundamental rights, both guaranteed by the Charter of Fundamental Rights of the EU: respect for private and family life (Article 7) and the protection of personal data (Article 8). Notwithstanding their importance, fundamental rights are not absolute and can be restricted in exceptional situations. As stated in Article 52(1), restrictions on these rights can only be imposed when lawful, legitimate, and proportionate.

Location privacy is also protected under Article 8 of the European Convention on Human Rights and likewise can only be limited by derogation in time of emergency, namely war or other public emergency threatening the life of the nation. In that case, measures shall be taken strictly to the extent required by the situation and cannot be inconsistent with other obligations under international law (Article 15).

Data retention in EU context

In the Digital Rights Ireland case, the ECJ declared the invalidity of Directive 2006/24/EC, which required providers of publicly available electronic communication services or of public communication networks to retain individuals’ telecommunication data for the purposes of preventing, investigating, and prosecuting serious crime. The ECJ took the view that the Directive did not “provide for sufficient safeguards … to ensure effective protection of the data retained against the risk of abuse and against any unlawful access…”. According to the ECJ, although the Directive satisfied a valid objective of general interest (public security), it did not meet the principle of proportionality.

To date, there is no EU legislation regarding data retention. Filling the void, the ECJ ruled in the Tele2 Sverige case on the scope and effect of its previous judgment in Digital Rights Ireland, establishing minimum safeguards that must be included in any national law on data retention. The ECJ concluded that national legislation that did not contemplate these minimum safeguards would be precluded pursuant to Article 15(1) of the ePrivacy Directive.

Despite the guidelines set out in the Tele2 Sverige judgment, a survey by Privacy International indicates that, as of 2017, a large number of Member States had not yet made the changes necessary to bring their national legislation into compliance. This is especially important in this time of pandemic, as many States in Europe are turning to private telecom companies for the disclosure of retained location data in order to fight the COVID-19 outbreak.

Data retention and location privacy: the need for harmonisation

This scenario highlights the importance of harmonisation on the subject at European level, which would contribute to safeguarding citizens’ privacy rights. The current coordination between private companies and governments will reveal how access to sensitive telecommunication data by public authorities affects the retention of data for private purposes.

In the light of the COVID-19 pandemic, location data can be very useful for epidemiological analysis, medical research, and measures against the spread of the disease. This usefulness, however, does not preclude respect for privacy rights. In that context, a European framework for data retention is paramount to location privacy, since it can effectively regulate what data can be retained and for how long, and what measures must be taken to reduce the risk of violations and to ensure that data is stored and shared in legitimate and responsible ways.

Final remarks

The retention, processing, and exchange of location data to handle the pandemic do not necessarily have to violate privacy. There are mechanisms that, although not infallible, minimise the risk of breaches in the processing of personal data, in particular aggregation and anonymisation. Besides, even in exceptional cases in which the processing of personally identifiable information is needed, EU regulation and case law have already set some boundaries, especially concerning proportionality. What really matters is the approach authorities choose to take after the outbreak subsides, so that mass surveillance does not become the norm.

[1] POMERANTZ, Jeffrey. Metadata. Cambridge: The MIT Press, 2015, p. 21.
[2] BACA, Murtha (ed.). Introduction to Metadata. 3rd ed. Los Angeles: Getty Research Institute, 2016, p. 2.
[3] RAGHUNATHAN, Balaji. The Complete Book of Data Anonymization: From Planning to Implementation. Boca Raton, FL: CRC Press, 2013, p. 4.
[4] ATAEI, Mehrnaz; KRAY, Christian. Ephemerality is the New Black: A Novel Perspective on Location Data Management and Location Privacy in LBS. In GARTNER, Georg; HUANG, Haosheng (eds.). Progress in Location-Based Services 2016. Switzerland: Springer, 2017, p. 360.

 

The Author

Patrícia Corrêa is a Portuguese qualified lawyer currently pursuing a Master’s Degree in International and European Law at Universidade Católica do Porto, Portugal.