The General Data Protection Regulation (GDPR), viewed from one perspective, is an attempt by the EU to place the citizen – the individual – at the heart of decision-making in respect of their own data.
EU law places great emphasis on the rights and freedoms of the individual, weighing them against the rights of businesses to make money in ways that may infringe those rights.
In regulatory terms, GDPR is in some ways a simplification of what came before – the regime around international transfers is broadly the same – while the filing of both contractual clauses and binding corporate rules is actually easier, as a single "lead authority" will be responsible. Previously, companies would have to speak to the regulator in each EU jurisdiction, and file BCRs for inspection by three regulators, which built in significant delays.
The factor drawing considerable conversation is the size of the fines – up to 4% of global turnover – which is significant to any corporation. Where data protection was once a matter of risk management that less-scrupulous organizations could treat as a cost of doing business, that is no longer possible. The recent (non-privacy) case against Google resulted in a €2.3 billion fine, and as the EU's history with Microsoft indicates, where EU enforcement agencies or courts have teeth, they are not afraid to use them.
Which brings up the question: is it a coincidence that the GDPR will have the greatest effect on non-EU companies that use data in the ways described above? The answer is no.
The EU would argue, with some significant justification, that the technologies being developed allow companies and governments to build a far more detailed picture of the individual than most citizens understand, which could impact a whole host of areas of their lives. It is therefore correct, in the EU's view, that it act to protect EU citizens at large.
From a different perspective, it may appear that data protection is actually data protectionism—a way of controlling the flow of a commodity in which the EU currently runs a huge trade deficit.
As we are seeing with Canada and with Privacy Shield in the US – both currently deemed adequate by the EU – the EU is starting to challenge jurisdictions based on the whole of their legislative environment. This includes the government's use of big data and surveillance technology, and the protections afforded to its own citizens. So, is this legitimate protection of the rights of EU nationals whose data is processed abroad, or a way of restricting or blocking the ability of non-EU countries to access the data market and address the data trade deficit?
This will come into sharp focus over the next two years, particularly for UK firms, as the UK is a huge supplier of data services across Europe. Post-Brexit, there is little certainty about the legal status of the UK under GDPR. While the UK is firmly committed to implementing GDPR, that may not be enough to provide it with an instant adequacy finding, which would be required to maintain data flows in an easy and uninterrupted way.
In terms of the impact on companies' use of technology for their own protection, this may play out in the courts. The recent Article 29 Working Party opinion on the processing of employee data raises some points of concern, for example commenting on the inspection of TLS-encrypted traffic as an invasive technology. There is always a balance to be struck between protecting individuals' privacy and protecting the data that companies hold on individuals. The regulation permits the processing of data where necessary to prevent and detect crime, but the Article 29 WP paper does not seem to take into account that the only way to do this effectively is to monitor system usage at a detailed level – to prevent threats coming in and data illegally flowing out (the insider threat).
This issue needs to be discussed soon; otherwise, protections intended to secure individuals' privacy may undermine the ability of individuals (and companies) to prevent their data from being stolen. It may also be that, in that case, the EU is driven to accept greater use of security technology in order to continue doing business in other markets. An interesting dilemma, for sure.
In the long run, it may be that the EU decides it is better to have automated surveillance by dispassionate machines to protect the 'greater good' than to risk large-scale data breaches. The only question then would be: who runs the machines?