Europe has AI regulations: Should the US follow suit?

This is a contributed piece by David Murray, CMO and Chief Business Development Officer at Corvil

We humans believe we have a good understanding of both time and causality, and we do – at human scales. A bowling ball hits the seven pin in a seven-ten split, and the seven pin ricochets into the ten pin. Because we know the order of these events, we know the ten pin could not have caused the seven pin to fall (or the ball to be bowled, for that matter).

Now imagine that all these events happened so fast they appeared to be simultaneous, in a millisecond or so. In the blink of an eye, the bowling ball is gone and the lane is clear. Could you say, with certainty, what caused what? Now, what if every pin and every bowling ball in the entire alley disappeared in that same millisecond? Establishing any chain of causation would be near impossible, at least for a human.   

This is a rudimentary image of "machine time" – events happening so fast that they might as well be simultaneous, at least to human perception. The complexity is far greater, with more historical data and hundreds or thousands of live inputs, and so are the stakes, with self-driving cars, autopilot systems, and medical diagnoses on the line. And that is exactly the reality that widespread AI will eventually bring to any part of life that is touched by technology – which is most of it.

The question of how powerful AI will be regulated is one of the most pressing of our modern age, up there with how we will provide food and energy to the next billion humans, whether we'll make it to Mars after all, and who will end up on the Iron Throne. Giants of industry and intellect, from Zuckerberg to Hawking to Musk, have weighed in on what AI will mean for humanity. Some are more optimistic than others. All would agree that we need, sooner rather than later, strong and comprehensive guidelines that allow humans to understand and, as much as possible, control and guide the actions of very powerful "machines."

In Europe, a precursor to regulation for powerful AI took effect on January 3, 2018. The Markets in Financial Instruments Directive II ("MiFID II") represents one of humanity's earliest efforts to understand and monitor autonomous "machines". The purpose of the regulation is to increase transparency and curb abuse in financial markets. Among many other things, MiFID II sets out some very strict guidelines about how precise financial firms need to be in timestamping and recording algorithmic trading events: a granularity of one microsecond or better, with clocks accurate to within 100 microseconds of Coordinated Universal Time (UTC).
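
To make the numbers concrete, here is a minimal sketch (not a compliance tool) of what recording an event at microsecond granularity and checking clock divergence against a 100-microsecond tolerance might look like. The event names, the reference-clock value, and the helper functions are illustrative assumptions, not anything prescribed by MiFID II itself.

```python
# Illustrative sketch only: microsecond-granularity timestamps and a
# 100-microsecond UTC divergence check, loosely inspired by MiFID II's
# clock-synchronisation requirements. Names and values are assumptions.
import time
from datetime import datetime, timezone

MAX_DIVERGENCE_US = 100   # assumed tolerance: 100 microseconds from UTC

def timestamp_event(event_name: str) -> dict:
    """Record an event with a microsecond-resolution UTC timestamp."""
    now = datetime.now(timezone.utc)
    return {
        "event": event_name,
        "utc_timestamp": now.isoformat(timespec="microseconds"),
    }

def clock_within_tolerance(local_epoch_us: int, reference_epoch_us: int) -> bool:
    """Check that the local clock is within the allowed divergence from UTC."""
    return abs(local_epoch_us - reference_epoch_us) <= MAX_DIVERGENCE_US

# Example usage with a hypothetical UTC reference reading:
record = timestamp_event("order_submitted")
local_us = time.time_ns() // 1_000
reference_us = local_us + 40      # pretend the UTC reference differs by 40 µs
print(record, clock_within_tolerance(local_us, reference_us))
```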

So why does this matter? The EU's financial regulatory body is effectively asking for the highest-resolution view yet of a business that is now almost completely done by machines. Those machines "think" through complicated algorithms and perform thousands or even millions of actions per second based on a high-volume stream of inputs ("market data" across thousands of instruments and metrics). No human stands any chance of keeping up, and oftentimes no human can understand what has transpired, even in retrospect.

In the very near future, all digital business will be driven by similar machines acting at similar speeds. Already, algorithms decide what advertisements we see, how flights are priced, and, more infamously, everything we see in our various social media feeds. Already, some of the most sophisticated companies in the world have trouble figuring out (and explaining) why their algorithms – which are basically precursors to AI – do what they do. Our planes are flown by algorithms, AI fuels self-driving cars, and marketing through facial recognition is no longer science fiction. The governance, auditability, accountability, and security of these systems will be imperative going forward if we want a society that continues to function.

The first step in understanding why something happened is understanding how it happened. Back to the bowling alley: let's say I can give you a timestamped list of every bowling ball thrown, its velocity and spin, where it touched every pin, and the resulting impact on each pin in that establishment since the beginning of time. Reconstructing the order of events and establishing what caused what becomes a whole lot easier (although it can still be a lot to sort through). We would then be able to pick out some of the anomalies: if a ball jumped a lane and took out pins it wasn't supposed to, or if a lane was cleared early.
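
As a toy sketch of that idea, the snippet below reconstructs the order of timestamped impact events and flags the "ball jumped a lane" anomaly. The record fields and the anomaly rule are illustrative assumptions chosen to match the bowling analogy, not a description of any real audit system.

```python
# Toy example: reconstruct event order from timestamps and flag anomalies.
# The ImpactEvent fields and the "crossed lanes" rule are assumptions made
# for illustration only.
from dataclasses import dataclass

@dataclass
class ImpactEvent:
    timestamp_us: int    # microseconds since the start of the session
    ball_lane: int       # lane the ball was thrown in
    pin_lane: int        # lane of the pin that was struck
    pin_number: int

def reconstruct_and_audit(events: list[ImpactEvent]) -> list[str]:
    """Sort events into chronological order and report out-of-lane impacts."""
    anomalies = []
    for e in sorted(events, key=lambda e: e.timestamp_us):
        if e.ball_lane != e.pin_lane:
            anomalies.append(
                f"t={e.timestamp_us}us: ball from lane {e.ball_lane} "
                f"hit pin {e.pin_number} in lane {e.pin_lane}"
            )
    return anomalies

# Example: two normal impacts and one ball that jumped a lane.
log = [
    ImpactEvent(1_000_500, ball_lane=3, pin_lane=3, pin_number=7),
    ImpactEvent(1_000_512, ball_lane=3, pin_lane=3, pin_number=10),
    ImpactEvent(1_000_530, ball_lane=3, pin_lane=4, pin_number=6),
]
print(reconstruct_and_audit(log))
```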

When machines run just about all of our digital lives, and their decisions are increasingly independent and beyond human review, we will need this visibility into their inner workings more than ever. MiFID II aims to protect the financial system, or at least to establish a system through which to efficiently figure out what happened when things go wrong. Similar regulations, with similar demands on record-keeping, accuracy, intent, and the ability to reconstruct very, very fast events, will soon be needed everywhere.

Clearly, these types of technical guidelines do not resolve the myriad ethical consequences that will arise from the widespread use of superhuman, algorithmic decision-making. But they are the necessary first step. Without a detailed enough view into the "thought processes" of the machines, we will end up with a world of AIs that are effectively "black boxes", offering little to no visibility into what they are doing and why. That will make answering any question about whether things went wrong (and why) very difficult, if not impossible.

Asimov foresaw the need to govern AI with his Three Laws of Robotics, which seem increasingly relevant and insightful. The debates over AI regulation are just getting started, but two things are clear: 

One: The foundation of AI regulation is systems of extremely high time precision that ensure visibility into, and oversight of, algorithmic and autonomous environments.

Two: Without such a foundation, we risk becoming blind actors in a reality that we cannot validate, comprehend, or hold accountable, one that gives way to a new-age existential debate over fate versus free will.

There is plenty of talk about the big picture: figures like Hawking, Musk, and Zuckerberg have pushed the loftier points of the debate into the public forum. At its core, though, this is a physics problem of action and reaction. It begins with human intent, which quickly becomes obfuscated as systems evolve on their own. Foundationally, we require real-time auditability, accountability, and oversight to understand what is happening and why, and to evaluate it against the original intent in order to identify anomalies. It is only through this oversight that we can hope to provide protections – both against unintended consequences and against cyber-disruption.

In the absence of that, we will never see anything more than the score of each frame or the game as a whole. 
