Who wants to play by the rules? Why self-regulation and government intervention are at a crossroads

Can tech firms be trusted to know what's best for users?

For some politicians and business leaders, regulation is a dirty word. The argument generally goes that regulation is bad for economies because it stifles innovation and growth: too much red tape and bureaucracy, too much government intervention. Triumphalist, propaganda-like statements from the White House last October, and a bizarre Economic Report of the President 2020, have done little to clarify the argument either way. Claims that the Trump administration's repeal of the net neutrality rules will increase real incomes by more than $50 billion per year and consumer welfare by almost $40 billion per year seem ridiculous. There is no evidence that de-regulation in the US has improved, or will improve, broadband prices or services.

The problem is that stories like this create a false notion of regulation: what it is intended to do and what its short- and long-term impact will be. Is it good or bad for business? Should regulators take a cautious, wait-and-see approach to new technology, or should they lay out a few ground rules early in development cycles to avoid later damage to consumers and businesses?

There is no single catch-all answer, but what is becoming increasingly clear is that the established big tech companies are not exactly covering themselves in glory when it comes to self-regulation. This is one of the key reasons behind the UK government's decision to empower the watchdog OFCOM to regulate internet companies over harmful content (something Germany and Australia have also done). The announcement in February seemed fair enough on the surface - who can really argue against shutting down abusive, illegal content? - but content that is harmful yet not illegal drags regulators into grey areas.

For Ben Derrington, legal director at law firm Ashfords, this is problematic and, in some ways, gives ammunition to the critics of regulation.

"What is notable about the UK proposals is their breadth, which is expressed to address not just illegal content but ‘harmful' content," he says adding that issues will arise if companies fail to create terms and conditions which are specific and accurate enough to allow breaches to be clearly identifiable for OFCOM.

"In response, will OFCOM be tempted to heavily prescribe specific terms and conditions for companies to publish?" asks Derrington. "This would produce a conflict with the Government's desire to protect freedom of expression. As such, the efficacy of regulatory enforcement is particularly reliant on the co-operation of those companies caught by their regulatory powers."

It's an interesting point because it highlights the complexities regulators face in policing such rules. We will of course have to wait and see whether OFCOM gets it right but, as Derrington points out, it is a difficult balance to strike.

"As with all legal frameworks for risk control, compliance will either need to be ‘strict' where all breaches are punishable, no matter what the circumstances," he says, "or the duties on businesses will need to be defined by what the regulator deems to be ‘reasonable', which will involve the production of detailed codes of practice and guidelines. The first approach will be criticised as draconian and will lead to over-defensive measures by controllers of content, the second approach is prone to very significant difficulties in producing definitions that are politically acceptable; proportionate for all operators and sufficiently specific that they can be effectively enforced."

Stifling innovation in AI?

It will be interesting to see whether these regulations do curb harmful content in the UK (and in Germany and Australia), but one area that has already been the subject of much regulatory debate is AI. While online content rules primarily, but not exclusively, target big tech platforms such as Facebook and Google, any AI regulation will affect a much broader range of businesses.

At the moment there are a number of guidelines and ethical frameworks floating around, but most of them are just that: guidelines. Talk of regulation tends to be met with concern. In February, the European Commission released a paper outlining its plan for AI, one that for the first time courts the idea of regulation.

The White House had already raised concerns in January, outlining its own proposals to govern the development and use of AI. It called on Europe to avoid "heavy-handed, innovation-killing models" while encouraging US federal agencies to uphold the "safe and trustworthy creation and adoption of AI technologies."

The implication here is that regulation would stifle innovation and competitiveness, and that, to a large extent, there has to be self-regulation within government-inspired frameworks of ethics and interoperability. Great if everyone plays the game, but we know from history that self-regulation rarely works to the benefit of the masses. So, is Europe getting it right or wrong on AI?

According to Jerry Levine, global general counsel and corporate secretary at leading AI firm IPsoft, increased regulation does create burdens, but it also opens the path for innovation.

"What's best for innovation and growth is ensuring that we actively support research and development, both private and public, into AI technologies," says Levine. "The EU's position is actually quite beneficial at the moment, where they are attempting to balance the needs of the public with the advancement of technology (training, recordkeeping, oversight, information provision, robustness, accuracy, and the like)."

It's an interesting point from a company that has been working with AI in developing its cognitive agent Amelia. So, does Levine expect to see regulation soon? What will it look like, and how will it impact development?

"I believe that we will see greater regulation and greater government interest in protection of citizens' rights: consumer, safety, and fundamental human rights. The issue is not will AI be regulated, but how will it be regulated?" says Levine.

He suggests that much of what will be regulated is already covered to some extent by existing rules in sectors such as healthcare, finance, motor vehicles and consumer product safety. He believes new regulations will emerge, but very much in the form of the vertical, sector-specific laws we already see today.

 "Where we will continue to see regulation and development of new regulations and legislation is in those "classically" regulated areas, but also in employment decisions, financial / lending decisions, advertising, data collection, and so forth," he adds. "Additionally, the use of AI applications for biometric and surveillance will lead to additional regulation as well—looking at both the UK and the EU statements, preservation of rights is key for both jurisdictions."

Ultimately it's a balancing act, but then, as is so often the case, technological innovation and big business lead the way and regulation tends to follow, usually after the failures of self-regulation have landed on the heads of innocent individuals. It doesn't always play out that way, of course, but as we keep hearing, AI is special. It's going to change the world and, for that reason, perhaps it needs special attention. Regulation does not necessarily stifle innovation; if anything, it should make innovators think more carefully. The fear, of course, is that the AI horse has already bolted.