AI-powered emotion recognition, typing with thoughts or eavesdropping virtual assistants: three non-fictional examples illustrate how AI may impact society. AI-related products and services increasingly find their way into daily life. Are the EU's fundamental rights to privacy and data protection equipped to protect individuals effectively? In addressing this question, the dissertation concludes that no new legal framework is needed. Instead, adjustments are required. First, the extent of adjustments depends on the AI discipline. There is no such thing as 'the AI': AI covers various concepts, including the disciplines machine learning, natural language processing, computer vision, affective computing and automated reasoning. Second, the extent of adjustments depends on the type of legal problem: legal provisions are violated (type 1), cannot be enforced (type 2) or are not fit for purpose (type 3). Type 2 and 3 problems require either adjustments of current provisions or new judicial interpretations. Two instruments might be helpful for more effective legislation: rebuttable presumptions and reversal of the burden of proof. In some cases, the solution is technical, not legal: research in AI should address reasoning deficiencies in AI systems and their lack of common sense.
Children with intellectual disabilities have consistently occupied a marginalised position within the digital landscape. While they might reap benefits from using digital products and services, they are also particularly vulnerable to online risks due to cognitive deficiencies and lower critical literacy skills. As a result, children with intellectual impairments frequently encounter various predicaments, encompassing online sexual solicitation, exposure to inaccurate information and manipulative behavioural designs, as well as various privacy and data protection concerns. Regarding these challenges, data protection by design under Article 25(1) GDPR can play a pivotal role in dismantling certain barriers. This approach allows digital service providers to take children's evolving capacities into account and implement technical and organisational measures appropriate for their age and developmental stage. The aim of this article is to explore what challenges children with intellectual disabilities might confront in the digital environment and to examine how data protection by design can help prevent or mitigate these challenges. Specifically, the article proposes a series of potential measures: (1) presenting information transparently and in a manner accessible to children with intellectual disabilities; (2) crafting an inclusive user interface that enables these children to actively and autonomously engage with the digital environment; (3) integrating interactive assistive tools, such as AI bots, to offer guidance and support; and (4) introducing parental control mechanisms that allow parents to provide assistance in alignment with their children's needs and preferences.
Mechanisms to control public power have been developed and shaped around human beings as decision-makers at the centre of the public administration. However, technology is radically changing how public administration is organised, and reliance on Artificial Intelligence is on the rise across all sectors. While carrying the promise of an increasingly efficient administration, automating (parts of) administrative decision-making processes also poses a challenge to our human-centred systems of control of public power. This article focuses on one of these control mechanisms: the duty to give reasons under EU law, a pillar of administrative law designed to enable individuals to challenge decisions and courts to exercise their powers of review. First, it analyses whether the duty to give reasons can be meaningfully applied when EU bodies rely on AI systems to inform their decision-making. Second, it examines the added value of secondary law, in particular the data protection rules applicable to EU institutions and the draft EU Artificial Intelligence Act, in complementing and adapting the duty to give reasons to better fulfil its purpose in a (partially) automated administration. The article concludes that the duty to give reasons provides a useful starting point but leaves a number of aspects unclear. While providing important safeguards, neither EU data protection law nor the draft EU Artificial Intelligence Act currently fills these gaps.
Upon receipt of a right to be forgotten request, private actors like Google are responsible for implementing the balancing test between the competing rights of privacy and data protection on the one hand and free expression and access to information on the other. This amounts to private jurisprudence that data subjects, lawyers, and interested parties could, theoretically, game to their advantage. This paper critiques this process and argues two separate, but related points. (1) Search engines have become the sole arbiter of the rights to privacy and data protection under Articles 7 and 8 of the Charter of Fundamental Rights and Articles 8 and 10 of the European Convention on Human Rights, when safeguarding should be a responsibility of state authorities. (2) As private actors face litigation if their decision is not acceptable to the data subject, the right to access information and the public's right to know is compromised. Search engines exert considerable power over access to and usage of the Internet, yet benefit from frameworks that do not hold them to the same human rights standards as public actors or agencies. As such, empowering search engines as decision-makers over conflicting fundamental rights is problematic. Rather than allowing the content of the right to be forgotten to be fleshed out by private actors, the significant body of existing jurisprudence should form the basis for public guidelines on how to implement the right to be forgotten. An analysis of case law of national courts, the European Court of Human Rights and the CJEU shows that it is possible to reverse-engineer how search engines determine which requests will be actioned and which will be denied. This paper argues that a) collectively, the body of jurisprudence is of sufficient standing to develop a public and transparent balancing test that is fair to all stakeholders, and b) private actors should no longer be resolving the conflict between competing fundamental rights. The paper closes by positing a framework, loosely based on ICANN's Uniform Domain-Name Dispute Resolution Policy for resolving conflicts between competing cyber property rights, that provides transparency and accountability to the right to be forgotten and removes search engines as arbiters of the balancing test in select cases.
The GDPR aims to control the risks associated with the processing of personal data. It requires measures to minimise these risks and gives data subjects certain powers, such as the rights to be informed and to be forgotten. Big data is a relatively new technology, giving data controllers the power to permanently observe the users of digital services. This thesis therefore answers the question whether the GDPR is suited to avert the risks and power shifts associated with big data. To answer this question, the GDPR is compared to earlier EU legislation addressing technological risks and power shifts. Additionally, the suitability of the GDPR's anti-discrimination provisions is evaluated for the prevention of algorithmic discrimination. Results: the GDPR is not based on any discernible analysis of the risks of big data. Methods from EU environmental protection law and consumer protection law, aimed at technological risks and power shifts, were not applied. This can make evaluation of the GDPR's effectiveness more difficult and could stand in the way of developing a coherent body of case law. The conclusion proposes a number of guidelines for deciding court cases and points for evaluating the GDPR.
Today marks the first anniversary of the European Union's General Data Protection Regulation (GDPR) coming into force. From news about Amazon's Alexa listening to our private conversations to facial recognition cameras installed in airports and taxis, the year since has been a steady drip of revelations about the data collection practices of big tech firms and breaches that have exposed the personal information of millions of data subjects. Behind the scenes though, reaction to the GDPR has been quite different. Businesses have struggled to come to terms with their obligations under the new law, while others have failed to conduct proper balancing tests between competing rights. National data protection authorities have been overwhelmed with complaints, queries, investigations, and enforcement proceedings. Opaque guidance from the regulator has not exactly made implementation easy.
Web Privacy Measurement (WPM) has been established as an academic research field since 2012. WPM scholars observe websites and services to detect, characterize, and quantify privacy-impacting behaviors. The main goal of the research field is to increase transparency through measurement. Robbert J. van Eijk investigates the online advertisements that seem to follow you. The technology enabling these advertisements is called Real-Time Bidding (RTB). An RTB system is defined as a network of partners enabling big data applications within the organizational field of marketing. The system aims to improve sales by real-time data-driven marketing and personalized (behavioral) advertising. The author applies network science algorithms to measure the privacy component of RTB. The thesis shows that cluster-edge betweenness and node betweenness help us understand the partnerships of the ad-technology companies. From his research it transpires that the interconnection between partners in an RTB network is caused by the data flows of the companies themselves, due to their specializations in ad technology. Furthermore, the author shows that a Graph-Based Methodological Approach (GBMA) accommodates the differences in consent implementations across European countries. The GBMA is tested on a dataset of national and regional European news websites.
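The betweenness measures named in this abstract can be made concrete with a short sketch. The snippet below is illustrative only and is not taken from the thesis: the partner names and data-flow edges are hypothetical, and the networkx library is used simply as one convenient way to compute node betweenness, edge betweenness, and Girvan-Newman clusters on such a graph.

```python
# Illustrative sketch, not the thesis code: measuring a (hypothetical) RTB
# partner network with node betweenness, edge betweenness, and Girvan-Newman
# community detection. Edges stand for observed data flows between partners.
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Hypothetical data flows from a publisher through an SSP, an exchange,
# two DSPs, and a shared data management platform (DMP).
flows = [
    ("news-site.example", "ssp-a"),
    ("ssp-a", "exchange-x"),
    ("exchange-x", "dsp-1"),
    ("exchange-x", "dsp-2"),
    ("dsp-1", "dmp-1"),
    ("dsp-2", "dmp-1"),
]
G = nx.DiGraph(flows)

# Node betweenness: which partners sit on many shortest data-flow paths?
node_btw = nx.betweenness_centrality(G)

# Edge betweenness: which partnerships bridge otherwise separate clusters?
edge_btw = nx.edge_betweenness_centrality(G)

# Girvan-Newman removes the highest-betweenness edges first, splitting the
# network into clusters of more tightly connected (specialised) partners.
clusters = next(girvan_newman(G.to_undirected()))

print(sorted(node_btw.items(), key=lambda kv: -kv[1])[:3])
print(sorted(edge_btw.items(), key=lambda kv: -kv[1])[:3])
print([sorted(c) for c in clusters])
```

In this toy graph, the exchange scores highest on node betweenness because most data-flow paths pass through it, which mirrors the intuition that betweenness highlights the partners whose specialisation makes them interconnection points in the network.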
Our lives are increasingly intertwined with the digital realm, and with new technology, new ethical problems emerge. The academic field that addresses these problems—which we tentatively call ‘digital ethics’—can be an important intellectual resource for policy making and regulation. This is why it is important to understand how the new ethical challenges of a digital society are being met by academic research. We have undertaken a scientometric analysis to arrive at a better understanding of the nature, scope and dynamics of the field of digital ethics. Our approach in this paper shows how the field of digital ethics is distributed over various academic disciplines. By first having experts select a collection of keywords central to digital ethics, we have generated a dataset of articles discussing these issues. This approach allows us to generate a scientometric visualisation of the field of digital ethics, without being constrained by any preconceived definitions of academic disciplines. We have first of all found that the number of publications pertaining to digital ethics is increasing exponentially. We furthermore established that whereas one may expect digital ethics to be a species of ethics, the various questions pertaining to digital ethics are in fact predominantly being discussed in computer science, law and biomedical science. It is in these fields, more than in the independent field of ethics, that ethical discourse is being developed around concrete and often technical issues. Moreover, it appears that some important ethical values are very prominent in one field (e.g., autonomy in medical science), while being almost absent in others. We conclude that to get a thorough understanding of, and grip on, all the hard ethical questions of a digital society, ethicists, policy makers and legal scholars will need to familiarize themselves with the concrete and practical work that is being done across a range of different scientific fields to deal with these questions.
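As a concrete, purely hypothetical illustration of this keyword-based approach, the sketch below filters a bibliographic export on a handful of expert-selected keywords and then summarises the result per discipline and per year. The file name, column names, and keyword list are invented for the example and do not describe the authors' actual pipeline.

```python
# Illustrative sketch of a keyword-based scientometric selection; the input
# file, column names, and keyword list are hypothetical placeholders.
import pandas as pd

KEYWORDS = ["privacy", "algorithmic fairness", "informed consent", "surveillance"]

# Hypothetical export from a bibliographic database (e.g., Scopus or Web of Science).
records = pd.read_csv("bibliographic_export.csv")
text = (records["title"].fillna("") + " " + records["abstract"].fillna("")).str.lower()

# Keep the articles that mention at least one expert-selected keyword.
mask = text.apply(lambda t: any(k in t for k in KEYWORDS))
digital_ethics = records[mask]

# How is the field distributed over disciplines, and how does it grow per year?
by_field = digital_ethics["subject_category"].value_counts()
by_year = digital_ethics.groupby("publication_year").size().sort_index()

print(by_field.head(10))
print(by_year)
```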
The GDPR poses special requirements for the processing of sensitive data, but it is not clear whether these requirements are sufficient to prevent the risk associated with this processing because this risk is not clearly defined.
Furthermore, the GDPR’s clauses on the processing of—and profiling based on—sensitive data do not sufficiently account for the fact that individual data subjects are part of complex systems, whose emergent properties betray sensitive traits from non-sensitive data.
The algorithms used to process big data are largely opaque to both controllers and data subjects: if the output of an algorithm has discriminatory effects coinciding with sensitive traits because the algorithm accidentally discerns an emergent property, this may remain unnoticed. At the moment, there are no remedies that can prevent the discovery of sensitive traits from non-sensitive data.
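The mechanism described here, where an algorithm picks up a sensitive trait it was never given, can be illustrated with a toy example. The sketch below is constructed for this text and is not taken from the source: all features, names, and numbers are invented, and a simple scikit-learn logistic regression stands in for an arbitrary big data model.

```python
# Toy illustration (invented for exposition): a model trained only on
# nominally non-sensitive features still produces outputs that track a
# withheld sensitive trait, because one feature acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

sensitive = rng.integers(0, 2, n)              # never shown to the model
proxy = sensitive + rng.normal(0, 0.3, n)      # e.g., a postcode-like feature
other = rng.normal(0, 1, n)                    # genuinely unrelated feature
X = np.column_stack([proxy, other])

# The outcome the controller actually optimises for (e.g., "likely customer").
y = (0.8 * proxy + 0.2 * other + rng.normal(0, 0.5, n) > 0.4).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# The positive rate differs sharply between the two sensitive groups, even
# though the sensitive attribute itself was never part of the training data.
print("positive rate, group 0:", pred[sensitive == 0].mean())
print("positive rate, group 1:", pred[sensitive == 1].mean())
```

Because the disparity arises from a learned correlation rather than from any explicit use of the sensitive attribute, neither the controller nor the data subject may notice it, which is precisely the opacity problem described above.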
Managing the risks resulting from processing data that can reveal sensitive traits requires a strategy combining precautionary measures, public discourse, and enforcement until the risks are more completely understood. Insights from complex systems science are likely to be useful in better understanding these risks.