This paper confronts assertions made by Dr Michael Veale, Dr Reuben Binns, and Professor Lilian Edwards in “Algorithms that remember: Model Inversion Attacks and Data Protection Law”, as well as the general trend by the courts to broaden the definition of ‘personal data’ under Article 4(1) GDPR to include ‘everything data-related’. Veale et al. use examples from computer science to suggest that some models, when subjected to certain attacks, reveal personal data; accordingly, they argue that data subject rights could be exercised against the model itself. This paper draws on a computer science perspective, as well as case law from the Court of Justice of the European Union, to argue that effective machine-learning model governance can be achieved without widening the scope of personal data, and that the governance of machine-learning models is better achieved through existing provisions of data protection and other areas of law. Extending the scope of personal data to machine-learning models would render the protections granted to intelligent endeavours within the black box ineffectual.