Who Bears the Burden of AI Mishaps?


In a rare turn of events, technology developers and policymakers are working together to consider regulation before the impacts of the technology are fully understood. This is a welcome step, particularly given recent instances of AI going wrong. For example, a generative AI recipe app used by a supermarket in New Zealand suggested a recipe that would produce chlorine gas, labeled it an 'aromatic water mix', and promoted it as a non-alcoholic drink option. While this offers only a glimpse of the technology failing, experts such as Geoffrey Hinton, a leading figure in AI, warn that the technology could fail on a much larger scale and pose a serious threat to humanity. Sam Altman, the CEO of OpenAI, has likewise acknowledged that if the technology were to falter, the consequences could be significant.

Because AI is complex and constantly evolving, the technology is difficult to regulate. Its rapid pace of development makes it nearly impossible to frame rules that will remain applicable. Moreover, AI is used across industries such as healthcare and finance, each with its own set of benefits and risks. Still, as different regions weigh AI regulation, policymakers everywhere should recognize one core principle: hold the technology's developers and creators responsible.

AI Firms Push For Regulation On Their Own Terms

Earlier this year, Altman expressed support for AI regulation during a hearing before the US Senate. He proposed an international body, modeled on the UN's nuclear watchdog, to oversee and control AI technology. Big tech companies such as Microsoft and Google, which are leading the development of generative AI, have also agreed that AI needs regulation. It is encouraging to see these major corporations acknowledge the dangers of AI, but it is equally clear that they prefer regulations that serve their own interests. During a visit to London, Altman said that he intends to comply with EU regulations, but that if compliance proves too difficult, his company could cease operating on the continent.

In addition, a recent investigation by Time revealed that OpenAI quietly lobbied within the European Union (EU) to avoid stricter rules on artificial intelligence. The lobbying appears to have paid off: the draft of the AI Act approved by EU lawmakers did not include certain provisions that had initially been proposed. It is worth noting that Microsoft, which has invested nearly USD 10 billion in OpenAI, along with Meta and Google, had previously tried to weaken AI regulation in the region through lobbying of its own. Unlike OpenAI, these companies have long lobbied governments on a range of issues, including privacy regulation, copyright law, antitrust matters, and internet freedom. Given that track record, it is important to recognize that these corporations may push for AI regulations that align with their own interests rather than the welfare of the general public.

Policymakers Must Involve AI Companies

Even so, it is essential for policymakers to include these organizations in any talks about regulating AI. As leaders of the AI revolution, with firsthand expertise in building AI systems, they have earned a significant role in discussions about AI regulation. Policymakers, for their part, may understandably struggle with the intricacies of a technology they are not deeply familiar with. From a regulatory standpoint, it therefore becomes crucial to collaborate with the people who actually develop AI.

Giada Pistilli, a leading ethicist at Hugging Face, is apprehensive about the strong influence major tech companies wield in advocating for AI regulation. It is natural for these companies to take part in such discussions, since they are directly affected by the rules, and Pistilli believes their perspectives can offer valuable insights. However, she cautions that their influence and agendas may sometimes lead to biased advice, so their motives must be examined carefully when considering their participation in these conversations, she told AIM in an interview.

"Do they come to simply offer their expertise and point of view? Or do they aim to have a larger influence on policy and institutional decisions, which can have long-lasting effects?" It is essential to weigh their input against the broader public interest in order to shape the future in a way that benefits the majority, not just a few. However, policymakers must also keep in mind that while regulations are necessary for responsible AI implementation, excessive restrictions could impede innovation and hinder competitiveness. Therefore, finding the correct equilibrium between regulation and innovation is key, as stated by Gaurav Singh, the founder and CEO of Verloop.io.

The complexity of AI itself makes regulation harder still. Pistilli points to another factor: the inherent slowness of the legislative process, a consequence of its democratic nature. Pushing for faster regulation to keep pace with technological advances could unintentionally put democratic principles at risk, which is dangerous.

"In our effort to anticipate all possible dangers, we occasionally establish rules that are so extensive that they may not apply to particular circumstances, emphasizing the deficiencies of a completely cautious approach. This emphasizes the fact that there is no universal answer. It is essential to consistently review strategies, communicate with specialists, and above all, consult those who are directly impacted by the technologies in order to determine the optimal path forward," she expressed.

With tech giants actively engaged in discussions about regulating AI, policymakers must remember that these companies are the ones who brought the technology into the world; it is therefore equally important that they be held accountable when it malfunctions. Pistilli argues that while responsibility for AI should be shared, the bulk of both moral and legal accountability should lie with the developers of AI systems.

"It is a simplification and quite frankly unfair to criticize users by saying 'you're doing it incorrectly' without giving them thorough instructions or a clear grasp of how it should be used. As I've consistently emphasized, providing a 'magical box' or a complicated, unclear system to a large audience carries numerous dangers," she expressed.

The unpredictability of human behavior, combined with the immense capabilities of AI, makes it extremely difficult to anticipate every potential misuse. Developers therefore carry a crucial responsibility not only to design AI responsibly but also to ensure that users are properly educated and equipped to use it responsibly. Annette Vee, an associate professor at the University of Pittsburgh, has observed that the rapid, somewhat hasty rollout of AI models likely means limited testing before release: models are "deployed" publicly, and companies gauge the extent of the impact and handle the cleanup afterward.

In a blog post, Gary Marcus, a prominent critic of AI, has similarly argued that tech companies have not adequately anticipated or planned for the possible consequences of rapidly deploying advanced AI. Holding the creators of these technologies accountable is therefore crucial; it would push companies to exercise caution and greater vigilance before launching a model that has not undergone extensive testing and scrutiny.

Singh agrees with Pistilli to a degree. He believes that addressing biases in AI systems requires a multi-pronged approach. "While it is crucial to hold creators accountable, it is not the only solution. Complex AI algorithms can be hard to interpret, making it difficult to explain their decisions. Regulations could enforce transparency and explainability standards so that it is easier to understand how AI reaches its conclusions," he told AIM.

Against Transparency

But would AI companies support transparency? Probably not. OpenAI has not disclosed key details about GPT-4, such as its architecture, model size, the compute used for training, how the dataset was constructed, or even the training method. OpenAI may be protecting proprietary information, or withholding details for security or ethical reasons, but the opacity only adds to the potential risks.

Biases in AI models frequently originate in the dataset or during training. The selection of training data can perpetuate historical biases and lead to various kinds of harm. To mitigate these effects, and to make informed decisions about where a model should not be deployed, it is crucial to understand the biases inherent in the data. Google, meanwhile, has consistently resisted regulation that would require scrutiny of its algorithms. The company has long treated its search algorithm as a closely guarded trade secret and has been reluctant to share specifics about how it works.

Don't Fault The Machines

According to Altman, AGI could arrive within the next ten years. Although it is only mid-2023 and superintelligence remains some way off, there is a widespread belief that AI systems themselves have the capacity to cause harm. Pistilli argues that this narrative implies our main worry should be the AI systems, as if they had autonomy of their own, rather than the people who build them.

"I perceive this as a strategy that not only enhances a fear-based storyline but also cleverly redirects attention from the human participants to the technology. By doing so, it places the complete burden on the technological creation, conveniently absolving the humans who conceived and manage it. It is important to acknowledge this shift in responsibility and ensure that the true masterminds behind these systems remain in the center of accountability," she remarked. While the question of how close we are to AGI is a separate debate, it is vital to prevent such discussions from gaining traction. If a future version of the GPT model, demonstrating AGI characteristics, encounters issues, OpenAI should be held responsible, not the superintelligent model itself.
