
ChatGPT – AI for everyone. About time to regulate it

24.02.2023

Winter 2022/2023 seems to be a spectacular moment in the history of Artificial Intelligence – even though the current hype is the effect of mass popularization rather than a genuine scientific revolution, which in this field began quite a while ago. With the rapid growth in the number of users of products based on generative AI (or GP AI), its potential to wreak havoc raises an increasing number of serious concerns.

The law, which usually struggles to keep up with new technologies, in this case seems to arrive right on time. By the end of 2023 we should see the final text of the AI Act – the EU regulation tackling the challenges brought about, among others, by generative AI. Whether these increasingly popular systems will be listed among the high-risk use cases is – unfortunately for citizens – still up for debate.

AI in every homestead

Between November 2022 and February 2023 the number of searches for the phrase “AI” in Google increased by half worldwide, while in Poland the increase was almost threefold. Within a month of its launch, ChatGPT was already being used by more than 100 million people, making it the fastest-growing app in the history of the internet.

The sudden popularity of this topic is a result of the release of new generative AI-based tools to the general public. Among the most popular are ChatGPT and Dall-E (by OpenAI), Stable Diffusion, Midjourney, and Lensa.AI (by Prisma Labs). What do they have in common? They can generate content seemingly similar or even identical to what a human would create: illustrations, photos, articles, manuals, poems, lines of computer code, voices, or videos. Soon, generative AI will most likely be used in a broad range of industries, i.a. robotics, medicine, marketing, or the military. Even before that happens, generative AI is already changing the way we think about technology.

The AI media frenzy is far from over, and many voices are predicting revolutions in numerous dimensions of our everyday lives. Will AI bring about the “end of work”, “the triumph of disinformation”, or “the demise of humanity as we know it”? Has the computer already passed the Turing test? Even actual AI developers fall for such narratives. In one instance, a Google engineer claimed that LaMDA, a chatbot based on a generative model he worked on, had “become sentient”. It is hard to downplay such claims when similar questions (regarding ChatGPT) are being asked even at top scientific conferences.

What is generative AI?

Instead of diving into the philosophical debate about the “technological singularity”, let’s talk about what AI – and its subset, generative AI – really is.

Artificial intelligence is a computer programme which, instead of following precise instructions, uses advanced statistical models to solve a problem. “Intelligence” in this case means that the programme – drawing on a dataset of problems and their correct solutions – finds the existing patterns on its own.
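To make this idea more concrete, here is a minimal, purely illustrative sketch in Python (using the popular open-source scikit-learn library; the toy numbers and labels are our own assumption, not an example from this article): instead of being told the rule, the programme is given problems with correct solutions and works the pattern out itself.

    # Illustrative sketch: "learning" means fitting a statistical model
    # to example problems and their correct solutions, not hard-coding a rule.
    from sklearn.linear_model import LogisticRegression

    # Toy dataset (hypothetical): each "problem" is a pair of numbers,
    # each correct "solution" is a 0/1 label.
    problems = [[1, 1], [2, 1], [8, 9], [9, 8]]
    solutions = [0, 0, 1, 1]

    model = LogisticRegression()
    model.fit(problems, solutions)          # the model finds the pattern itself

    print(model.predict([[2, 2], [9, 9]]))  # applies the learned pattern to new cases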

Let’s then take a closer look at the recent phenomenon of generative AI. And remember that, just as with any other technological development of the past, we are not helpless. On the contrary, we are the ones who make the decisions: as creators, consumers, and last but not least, voters.

Neither of the two concepts – “Generative AI” and, often used interchangeably (despite referring to a broader category), “General Purpose AI” (GP AI) – comes from academia. They were introduced as handy phrases describing the multi-functional programmes developed thanks to recent progress in AI research. Owing to their practicality, they made their way into official documents, including drafts of EU regulations.

According to academic researchers, a more precise definition of the new technology is “machine learning algorithms that allow modeling the aggregate distribution of the data provided”. A distinctive type of such algorithms are the so-called large language models, used to process information structured in both natural (Polish, English, Sanskrit, etc.) and formal (e.g. Java, Python, R) languages, as well as binary-coded representations of images and sound.

This category of AI includes programmes which – following a voice or written command expressed in a natural language – generate text, sound, images or code. Its applications include chatbots, virtual assistants, and models that process (e.g. age) photos, delivering results that not long ago could only be created by a human with the proper skills and resources. It has also been argued, for instance, that GPT can be used as a sort of self-learning operating system capable of interacting with or controlling other devices and programmes. The full potential of this generation of AI is still unknown. Yet it has already inspired some experts to speculate that humanity is on the threshold of a technological and economic breakthrough.
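As a hedged illustration of what “following a written command” looks like in practice, the sketch below uses the open-source Hugging Face transformers library and the small, publicly available GPT-2 model – our own choice for demonstration purposes, not one of the systems discussed in this article – to turn a prompt into generated text.

    # Illustrative sketch: a written command (prompt) goes in, new text comes out.
    # GPT-2 is used here only because it is small and public; commercial systems
    # like ChatGPT work on the same principle at a much larger scale.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Write a short note about artificial intelligence:"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    print(result[0]["generated_text"])  # the prompt followed by model-generated text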

Will the new AI replace search engines?

Contrary to the current media hype, some experts are not impressed by the capabilities of generative AI. Yann LeCun, a renowned computer scientist and Chief AI Scientist at Meta, believes that, at least from the scientific point of view, today’s generative AI models are not very innovative. The spectacular effects they deliver can be attributed largely to tens of thousands of hours of painstaking effort by OpenAI developers and engineers, who worked around computational obstacles and corrected the results of the algorithmic data processing used to train the artificial neural networks GPT runs on. LeCun believes that large language models (at least the ones known to him) are pretty much useless as a source of precise and reliable information (that is: they will not replace search engines). However, they may be useful as extra help with many tasks such as writing, planning or computer programming. LeCun is not alone in his scepticism. Polish AI scientist Piotr Gawrysiak (Warsaw University of Technology) believes that, despite many advantages, there is no reason to proclaim a scientific or business revolution as of yet. Arvind Narayanan, a Princeton computer science professor, is less subtle in his criticism and calls ChatGPT a “bullshit generator”.

But even the harshest skepticism has not cooled down the popularity of ChatGPT, and generative AI has quickly become the most discussed topic in the tech sector. Towards the end of 2022, Google (Alphabet) CEO Sundar Pichai allegedly announced a “code red” for the company. The reason? ChatGPT threatens to undermine Google’s dominant position in the search engine market. Soon after, the company disclosed its plan to launch its own multi-functional chatbot called “Bard” (so far, with limited success).

Microsoft took things to the next level. In January its CEO, Satya Nadella, right after the company invested 10 billion dollars in OpenAI, announced that new AI-based functions would be incorporated into the company’s products. Not long after, the first demo version of the Bing search engine with a built-in module based on ChatGPT was presented. Needless to say, products based on similar technology are allegedly being developed by other tech companies, incl. the Chinese giants Baidu and Alibaba.

Is there a reason to be scared?

Even though this new type of generative AI has only recently been made available to users, the list of (often loudly voiced) concerns is already long and growing. The most pressing issues include:

  • Legal liability

Let's imagine your photo or voice was used to create a deep fake, which was consequently used to blackmail someone or ruin someone’s reputation. Who should be held responsible? According to the ChatGPT terms of use – the person who used the application to create it. But should those who created the programme and did not protect it from abuse also be made liable? Which company would be liable in the case of the Microsoft and OpenAI partnership? Terms of use should not be decisive when the victim of such abuse goes to court or lodges a complaint with a regulator.

  • Discrimination and bias

AI systems may lead to discrimination based on a person’s gender, ethnicity, disability, etc. Whether the AI will repeat human biases depends on the data sets used to train it, as well as on the sensitivity of the system developers. Luckily, at least some generative AI developers are aware of this problem. After external experts demonstrated that the OpenAI algorithms used in Dall-E more often generate images of white men and sexualized images of women, and reinforce racial stereotypes, the company decided that a “censored” version would be made available to the public. Without proper regulations, potential victims of abuse may only rely on the internal, non-negotiable ethical standards of the companies developing such systems. The law does not yet determine who should take legal responsibility for the negative consequences of biased models, or even how to effectively prove that discrimination took place.

  • Even less privacy

Generative AI models use publicly available data sets (including sets containing personal data). All of us help to train them, often unconsciously, e.g. when we publish photos on social media or upload images to a programme simply to generate an image of an aged, younger, or otherwise perfected self. The algorithms trained this way are then used for completely different purposes: more detailed profiling and targeting, by both private companies (for marketing purposes) and public institutions (for surveillance). It is even more dangerous when such data is held by companies like Meta or TikTok, whose business models are based on monetizing our privacy. If they decide to use their vast amounts of data to train neural networks, it may turn out that generative AI knows more about us than we could possibly imagine.

  • Disinformation

Creating fake news or deep fakes is now easier, faster, and cheaper than ever. We will soon learn whether the mass production of photos, texts and videos created with AI-powered tools becomes the standard weaponry of unethical political marketing. The widespread use of generative AI may result in a further decrease of trust in the media and lock their audiences into ever tighter information bubbles. This risk could be tackled by the obligatory watermarking of AI-generated content – even if a human is responsible for the final result.

  • Development of unethical business models

According to a number of experts, the current trend in AI development creates bad incentives for the whole industry. Even though companies like Google and Meta develop similar services, so far they have not made them available to the public – due to several challenges, including the ambiguous status of data sets, the scope of false and harmful content created by the models, and responsibility towards shareholders. Critical voices come from representatives of both the civic sector (Daniel Leufer of Access Now) and the industry. Emad Mostaque, Stability AI CEO, criticized OpenAI for its errors and for the potential threats resulting from the unclear legal status of the data sets used to train the AI models.

That is just the beginning of the list of current and potential problems. Generative AI may be used for massive cyberbullying, internet scams, and other criminal activity online. It will affect education, science, and copyright – Getty Images has already sued the developer of Stable Diffusion (Stability AI) for the alleged illegal use of its database of images.

The media, especially those whose revenue is based on “clicks”, may face an even bigger problem, as ChatGPT can analyze news from various sources without the need to visit the websites themselves – and thus reduces the number of human interactions. The popularity of generative AI has also reignited the debate on technological unemployment and economic inequality. Even those who until recently held skeptical views on automation replacing people at their jobs (like professor Daron Acemoglu) now openly raise the alarm and claim that the massive introduction of AI-based solutions may lead to further inequality, discrimination in access to services, as well as political radicalization.

Artificial intelligence and law: what works? What is missing?

Do all these challenges mean that we are at the mercy of companies using AI in their products? Definitely not. AI already falls within the scope of numerous national and international laws, regulations, treaties, and technical standards. Major regulations in place include:

  • General Data Protection Regulation (GDPR) regulates all AI models available in Europe, which use data of natural persons;
  • Copyright law helps artists, whose works were used to train AI models and generate new content, to seek compensation;
  • Cybersecurity law oversees the use of AI-based products in critical infrastructure;
  • Digital Services Act (DSA) and Digital Markets Act (DMA) are to regulate how online platforms such as Facebook, YouTube or TikTok use content recommendation and moderation algorithms. According to the DSA and DMA, presumably from the beginning of 2024, all platforms will be obliged i.a. to publish risk analyses of their services, as well as report to the European Commission risk-mitigating measures they introduced.

Are these regulations sufficient? Unfortunately, there are still many blank spots on the AI regulation map. Moreover, various aspects of AI are regulated and supervised by different institutions, which results in diffused responsibility and makes it easier to apply the rules selectively.

New AI law

The European Union will soon have a new regulation dedicated exclusively to Artificial Intelligence. The AI Act was originally supposed to promote a European, trustworthy, ethical, and human-centered perspective. Yet the proposal presented by the European Commission in April 2021 failed to meet these ambitious expectations. The amendments proposed by the Council of the EU in November 2022 did not make it much better either.

Together with other members of European Digital Rights, Panoptykon Foundation recommended introducing into the AI Act measures which would protect our rights in the digital era. Our demands include the following:

  • people should be able to lodge a complaint in their own country should their rights be infringed;
  • people should be allowed to demand an explanation of important AI-based decisions (even if there was a human in the loop);
  • CSOs, like Panoptykon Foundation, should be allowed to represent persons whose rights were violated;
  • companies and institutions, private and public (including the security sector), should be obliged to analyze the impact of their systems on human rights;
  • people should be informed that they are facing a system based on AI – especially if this interaction poses a real risk that their rights may be violated.

What's next for AI? Proposed amendments in the AI Act

The hype over generative AI has not been overlooked by the EU regulators. Thierry Breton, an experienced French politician and top-level manager, and the current Commissioner for Internal Market of the European Union, asked about ChatGPT, replied: “This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data”. Luckily for us, many sitting MEPs share his opinion.

The European Parliament’s final version of the AI Act will be announced in April 2023, but MEPs’ attempts at tackling the issue of generative AI are regularly being leaked to the public. So far, Parliamentarians active in this process have mainly focused on instances in which AI-generated content resembles (or poses as) human creation. According to this perspective, the ‘high-risk’ category would also include systems that produce deep fakes: images, sounds, and videos presenting real people – or rather their digital avatars – doing or saying things they never actually did or said. Defining generative AI as a high-risk system would in consequence oblige its producers to run an audit before launching the system on the market, as well as to permanently monitor it for possible risks, including human rights violations. According to the working compromises, the Parliament’s version of the AI Act would also include exemptions to this rule, such as using AI in artistic and satirical activity or to create a clearly fictional character.

On the other hand, the version of the AI Act drafted by the Council of the EU differentiates the risk posed by generative AI according to the purpose it is used for. Thus, the producers of general purpose AI systems (GP AI) applied in high-risk areas such as law enforcement, education, or the labor market would have to follow a set of obligations, incl. the mandatory registration of the system, having an office in the EU, auditing and monitoring the system (pre- and post-launch), as well as following data protection regulations. The Council has nonetheless introduced a “backdoor”: the producer won’t have to follow these rules if they do not label the system as applicable in high-risk areas.

According to the European Parliament’s calendar, the new regulation (including articles on generative and/or general purpose AI) should be passed by the end of 2023. As past negotiations between the European Parliament, the Council, and the Commission have taught us, we should expect numerous changes to the original proposal. One thing is certain: the list of risks generated (sic!) by generative AI will only get longer. It should not come as a surprise: the fast-growing number of users and market applications is the best reason to regulate it in a comprehensive way.

Our main demands for the regulation of generative AI include:

  • introducing an obligatory fundamental rights impact assessment of AI systems, covering i.a. their impact on health, democracy, and the environment;
  • introducing the obligation to label AI-based systems as such for consumers;
  • ensuring the right of people and groups whose rights were violated to be represented by CSOs.

If the final version of the AI Act does not include such measures, the already uneven fight against algorithmic discrimination, internet aggression, hate speech, disinformation, and other harmful uses of AI will only become more difficult. That’s why human rights advocacy for the new law in this area is so crucial.

Filip Konopczyński

Cooperation: Katarzyna Szymielewicz, Maria Wróblewska
