Computer says ‘No...Yes...GDPR’

05 Dec 2017

  • Business intelligence
  • Leadership
  • Data

Next year (25 May 2018) the General Data Protection Regulation (GDPR), adopted by the European Parliament, comes into effect and will affect any organisation across the globe that processes the personal data of EU citizens. Much of the GDPR concerns the collection, storage and use of personal information; for those in data science and analytics, however, the most interesting aspects are those contained in Article 22: Automated individual decision-making, including profiling.

Article 22. Automated individual decision-making, including profiling
1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

2. Paragraph 1 shall not apply if the decision:
(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;
(b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
(c) is based on the data subject’s explicit consent.

3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.

The implications for algorithmic decision-making, predictive analytics and artificial intelligence/machine learning continue to be debated, particularly around two issues: built-in bias (which can contravene the regulation’s non-discrimination provisions) and the obligation to provide meaningful information about the logic involved in an automated decision.

How, then, do you explain an algorithm’s decision? Harder still, how do you explain the inner workings of a machine-learning model and the outcome it produced? Ethical and philosophical considerations come to the fore.
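One practical starting point for "meaningful information about the logic involved" is permutation feature importance: shuffle one input column at a time and measure how much the model's accuracy drops, revealing which features the decision actually leans on. The sketch below is illustrative only, not from the regulation or any specific library; the toy "credit decision" model, feature names and data are all invented for the example.

```python
import random

# Toy "credit decision" model: approves (1) if income covers the loan.
# Purely illustrative -- note that postcode is ignored by design.
def model(income, loan, postcode):
    return 1 if income * 10 >= loan else 0

def accuracy(rows, labels):
    return sum(model(*r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Drop in accuracy when one feature's values are shuffled.
    A large drop means the model leans heavily on that feature;
    zero means the feature played no role in the decisions."""
    rng = random.Random(seed)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, shuffled_col):
        r[feature_idx] = v
    return accuracy(rows, labels) - accuracy(permuted, labels)

rows = [(50_000, 400_000, "AB1"), (90_000, 500_000, "CD2"),
        (30_000, 450_000, "EF3"), (120_000, 600_000, "GH4")]
labels = [model(*r) for r in rows]  # ground truth = model output here

for i, name in enumerate(["income", "loan", "postcode"]):
    print(name, permutation_importance(rows, labels, i))
```

Because the toy model never reads postcode, its importance comes out as exactly zero; a non-zero score on a protected or proxy attribute is precisely the kind of built-in bias Article 22 debates revolve around. Model-agnostic techniques like this work on any black-box model, though richer tools (e.g. SHAP or LIME) give per-decision rather than aggregate explanations.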

As you contemplate these implications, the following articles and viewpoints may help fuel your thinking.

  • Transparency of machine-learning algorithms is a double-edged sword
  • European Union regulations on algorithmic decision-making and a “right to explanation”
  • GDPR – sounding the death knell for self-learning algorithms?
  • The regulatory future of algorithms
  • EU's Right to Explanation: A Harmful Restriction on Artificial Intelligence
  • Is there a 'right to explanation' for machine learning in the GDPR?