Can regulation speed up AI innovation?

Over the last few months, US and European regulators have moved quickly to control the use of LLMs and AI algorithms in both the public and private sectors. Most technologists fear that regulation will slow down the progress we have made in the last few years, before the AI revolution even gets a chance to start.

But what’s really at stake? And do businesses capitalizing on unprecedented consumer demand for AI have more than just their own interests in mind?

[Image: EU AI Act illustration]
Lots of regulation promising to protect the consumers

Regulating AI involves many different stakeholders, but the largest one is society in general. Many jobs will be changed completely, and new jobs will be created by the technological shifts that come with this technology. OpenAI, for example, employs a large in-house workforce to fine-tune its foundation models with reinforcement learning techniques in which humans give the algorithms feedback on which responses work best. Many companies are also hiring dedicated prompt engineers, in addition to prompt engineering becoming part of most white-collar jobs.
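For readers unfamiliar with that feedback loop, here is a minimal sketch of the kind of preference data human labelers produce. All names and structures below are illustrative, not OpenAI's actual pipeline:

```python
# Minimal sketch of the human-feedback data behind RLHF-style fine-tuning.
# Names and structure are illustrative, not OpenAI's actual pipeline.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # the response the human labeler preferred
    rejected: str  # the response the labeler ranked lower

def record_preference(prompt: str, responses: list[str], preferred: int) -> PreferencePair:
    """Turn a labeler's choice between two candidate responses into a training pair."""
    return PreferencePair(
        prompt,
        chosen=responses[preferred],
        rejected=responses[1 - preferred],
    )

# Example: a labeler judged the second candidate to be the better answer.
pair = record_preference(
    "Explain GDPR in one sentence.",
    ["GDPR is a law.",
     "GDPR is an EU regulation governing how personal data is collected and used."],
    preferred=1,
)
print(pair.chosen)
```

A reward model trained on many such pairs then scores new responses during reinforcement learning, which is the fine-tuning work described above.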

The topic did not get much attention until late last year, when OpenAI publicly released its latest GPT-3.5 model. The model's results surprised everyone, since nobody expected the technology to feel so human-like. Finally the moment arrived when people were both in awe of and scared about what might come next. This started a general discussion about the rapid progress in the performance of generative AI and its impact on society.

In the last decade, most people thought that AI would first impact mechanical jobs by replacing them with robots, then move on to very repetitive digital work, only later come for white-collar jobs, and finally replace software engineers. The reality turned out to be different. The best early use cases turned out to be the most highly paid ones: generating code, summarizing and explaining legal texts, and doing sales outreach or upselling to customers based on their recent activity. The rapid adoption of the technology (ChatGPT reached one million users in five days) and its potential impact on jobs and privacy quickly triggered discussions among regulators. That culminated in a congressional hearing with the CEO of OpenAI, Sam Altman, and two other technology leaders from the industry. Congress went as far as to suggest creating a new regulating entity that would be run by the CEO of OpenAI… It is very clear that regulation in the US is moving just as fast as the generative AI industry itself, with the goal of protecting users from harm. It is not a question of if, but of when, the regulators will step in to make sure that the technology benefits society as a whole, not just a few private research companies.

Besides society at large, there are also many industry experts who are keenly aware of the rapid progress in AI and who are calling for more oversight and regulation to slow down technological progress. In March 2023, many famous experts and entrepreneurs signed an open letter calling for a pause on training state-of-the-art AI models, in order to put better guardrails in place and reduce the risk of building a dangerous artificial general intelligence system. They argued that we need better regulation and oversight of the leading research institutions to make sure that, in the course of the AI race, we do not accidentally create a system that is harmful to humanity. The letter was not officially aimed at any single company, but it was very clearly meant to target the latest breakthroughs from OpenAI. Even more strangely, when the CEO of OpenAI testified in front of Congress and was asked how he would regulate AI, he proposed creating an international agency that would be allowed to regulate and audit all the leading research institutions. The most powerful people pushing the progress of artificial intelligence seem to agree that we need more regulation to oversee the entire industry. It is very rare to see private enterprise ask to be more tightly regulated, so what is so different about AI?

[Image: Sam Altman in front of Congress]
Sam Altman suggesting to introduce a world-wide organization to regulate AI

Most people in the industry favor early regulation for two main reasons: there is a realistic risk of a very bad outcome for humanity if things go wrong, and there are lessons to be learnt from the past decade of failed regulation in the digital currencies industry. The first reason, summarized as AI risk, is the possibility of an artificial general intelligence system turning against humans. We don’t really know how big that risk actually is, but since it is a possible outcome, most people in the industry are very open to early international oversight to make sure that humanity as a whole stays in control of what is being developed in those research labs. Trading a little transparency and overhead for the sake of avoiding the worst outcome on the path towards artificial general intelligence is definitely worth it.

The other reason for more regulation is spoken about much less. Digital currencies have existed since 2008, and ever since they were created there have been countless stories of the technology being misused and of consumers falling victim to fraud. Despite this, there has been very little clear legal guidance from any major Western jurisdiction: neither the European Union nor the United States has created clear legislation on how digital currencies and other blockchain-related products are treated legally. This has slowed down the adoption of the technology by every mature industry, and rightly so, since large companies have reputations and profits to lose if they fail to comply with “non-existing” rules. The lack of regulatory clarity in the blockchain industry pushed founders away from the US to start companies elsewhere, where the regulation is clearer (or there is no oversight at all). The companies that stayed in the US have been hit with heavy fines despite trying their best to comply with all the existing rules and to interpret them as well as they can. This has led to the strange situation where some leaders of the US crypto industry (for example the CEO of Coinbase, the largest crypto exchange in the US) have sued the SEC because it is neither responding to Coinbase’s questions nor willing to give any clarity on how the existing laws apply to companies. The leaders in the AI space have clearly learnt from that and welcome clear rules around data privacy and the development of more advanced intelligent systems. In the long run, this will increase the adoption of the technology by larger companies and thus push the entire industry forward.

[Image: SEC suing Coinbase]
Coinbase being sued since it could not get ahead of the regulation

While the absence of regulation slows down the adoption of new technologies by large companies, overly broad regulation can also slow down entire industries, as we saw with GDPR. Even though data privacy laws had to be updated, GDPR arguably went too far: internet users globally now have to click through cookie banners that nobody reads or cares about. With regulation, you always have to get the balance right between protecting consumers and not stifling innovation. The regulators should learn from the parts of GDPR that did not have the desired effect on protecting consumer data, and design the upcoming AI Act so that every constraint on developing and deploying AI models allows oversight over the models without slowing down new innovators. Unfortunately, larger companies have the resources to comply with regulations, which makes smaller companies the ones that suffer most from new rules. Therefore, drawing a clear parallel from GDPR, we should make sure that the upcoming AI regulations do not block open source projects from launching competing models. We need open source to thrive in the age of AI, since that is the only way to make the newest models available to all market participants, not only those willing to pay the closed research companies. Having competing open source models will also make it easier for companies to fine-tune their models and self-host them, ensuring customer data does not leave their premises. We should learn from previous waves of regulation and make sure the rules protect consumers while not slowing down the less well-capitalized players in the market.
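To make that last point concrete, here is a minimal sketch of self-hosting an open source model with the Hugging Face transformers library; "gpt2" is just a stand-in for whichever open model a company would actually deploy:

```python
# Minimal sketch: running an open-source model entirely on-premises.
# Requires `pip install transformers torch`; "gpt2" is a stand-in for
# whichever open-source model a company would actually deploy.
from transformers import pipeline

# The weights are downloaded once; after that, all inference runs locally,
# so prompts and customer data never touch a third-party API.
generator = pipeline("text-generation", model="gpt2")

result = generator("Summarize this contract clause:", max_new_tokens=50)
print(result[0]["generated_text"])
```

Because everything runs on the company's own hardware, this setup addresses the data-residency concern above without depending on a closed research company.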

We find ourselves at an interesting moment in time, where the EU is actively working on the AI Act and the US is slowly drafting its own legislation to control the AI industry. It is very likely that those laws will be passed as early as this year and come into effect by 2024. Even though the laws have not been finalized, it seems likely that every company building or providing a service that uses advanced AI will need to go through audits and continuous monitoring. Even though that might seem like a high price to pay for using generative AI, it will speed up the adoption of the technology by even the most conservative industries and companies. Such regulatory compliance will be harder for smaller companies than for incumbents, since it requires financial resources they often lack. Therefore, cheaper and easier compliance solutions are needed so that companies of any size can stay within the new rules. We have yet to figure out how generative AI will affect each job, but it is inevitable that every company and individual will be affected by this technology in the next few years. It will significantly increase the productivity of every worker, and who knows, maybe the 4-hour work week dream will finally become a reality.
