Following her appointment as president of the European Commission in November 2019, Ursula von der Leyen set out a number of initiatives. In particular, von der Leyen tasked Margrethe Vestager, the EU's executive vice president for a Europe fit for the digital age, with issuing an initial proposal for artificial intelligence policy—within 100 days.
Yesterday, the first of these documents was published on the Commission’s website. This follows various meetings in Brussels between the European Commission and senior representatives of internet giants Apple, Facebook, and Google; it seems more than likely that AI came up in those meetings. Are these businesses nervous about the possibility of new legislation that might impact their use of AI? Quite possibly.
The EU has a track record of leading with legislation to tackle emerging risks: the General Data Protection Regulation (GDPR), effective May 25, 2018, was followed by the recent California Consumer Privacy Act (CCPA), effective Jan. 1, 2020.
The privacy software industry has been turbocharged by these legislative acts. G2’s Merry Marwig, our lead analyst for privacy, is closely following developments, and we recently launched a raft of new data privacy technology categories to more accurately reflect the tech landscape.
It seems highly likely that frameworks and guidelines will emerge from Brussels, and the European Commission’s White Paper on Artificial Intelligence discusses this potential need, setting out criteria for understanding the level of risk an AI use case may present. The paper highlights possible requirements for these “high-risk applications,” which include:
- Training data
- Data and record-keeping
- Information to be provided
- Robustness and accuracy
- Human oversight
- Specific requirements for certain particular AI applications, such as those used for purposes of remote biometric identification
The paper goes into further detail on each topic, setting out the possibility of “conformity assessments” in the section on compliance and enforcement (page 23). The sentence most likely to concern operators of AI based outside Europe may be this:
“The conformity assessments would be mandatory for all economic operators addressed by the requirements, regardless of their place of establishment.”
Just as Silicon Valley is subject to GDPR, it could potentially find its AI activities subject to EU compliance oversight.
Beyond the regulations: AI usage in the future
Another aspect of the Commission's release caught my eye: what I have often described as the data "first mover" problem. In other words, internet giants have an almost insurmountable head start when it comes to data they have gathered that may be used to train and operate AI.
Interestingly, Ms. Vestager was previously the commissioner for competition, and another of the papers published yesterday, “Communication: A European strategy for data,” makes specific mention of what it terms “imbalances in market power” on page 8. This advantage, alongside that of established technology infrastructure, is a key factor when considering exercisable market power—both in the delivery of goods and services and in AI use cases. It is perhaps equally worrisome for legislators and technology companies alike.
It has often been said that data is the new oil. (As a student of economic history, I cannot help but think of the breakup of Standard Oil in 1911.) Legislative frameworks for governing AI are a promising development, but I wonder if controls over the data used by AI will define much of the ultimate role it plays. Needless to say, G2 will be keeping a close watch on developments in this space.