OpenAI’s GPT-3: The Ethical Landscape of a Powerful Tool

Giovanni Zaarour
10 min read · Dec 6, 2022

When the internet was created, people could never have guessed what its implications would be; the “age of information” we live in today was a cloudy horizon two decades ago. That cloudy horizon gave way to a vast digital ecosystem that runs modern society. Online information and communication, specifically, are key to the most important functions of modern nation states; political advertising, news media, public discourse, and information encyclopedias have all become completely internet-based. While information now moves faster and in greater volume than at any point in history, digitization has brought new problems such as online propaganda, bots, and increasingly divisive hate speech and bias. The internet has had decades to settle in, but the new and exciting technology is artificial intelligence (AI), which many would say is still in its cloudy-horizon phase. AI models have beaten humans at games, detected cancerous tumors better than doctors, generated beautiful painted artwork in seconds, and, most recently, learned to write in human language. GPT-3 is a natural language processing (NLP) AI model developed by OpenAI that has recently garnered significant attention and popularity in the tech community. GPT-3 has unprecedented capabilities, such as “writing essays, paraphrasing text, answering questions, translating, or generating computer code, novels, and news articles” [1].

While this is a fascinating technological breakthrough, the skies are still foggy: there is a great deal of uncertainty surrounding how this will affect the internet, and consequently the world. Computers capable of replicating human language can propagate biases, spread misinformation, and serve as tools for social manipulation, and this raises many ethical questions. In what ways can GPT-3 be used for malice? Who should be allowed to use this powerful tool? How can engineers make it less dangerous? Ultimately, the threat of GPT-3 cannot be eliminated by ethical engineering alone: removing biases and flaws from the AI model will make it safer to use, but it does not eradicate the malicious ways humans can use the tool. To mitigate those negative effects, this AI technology needs to be regulated responsibly, in a way that does not hinder technological innovation.

GPT-3 stands for Generative Pre-trained Transformer 3, an NLP AI model with 175 billion parameters, trained on an enormous corpus of human-written text, online communication, and other writing scraped from the internet. The parameters are the weights of a neural network; during training they are adjusted so that the model gets steadily better at predicting and producing text. One demonstration of GPT-3’s capabilities came when Mario Klingemann generated a short story about Twitter use in the writing style of the 19th-century English writer Jerome K. Jerome. All Klingemann provided was a short story title, the author’s name, and the first word of the text; GPT-3 did the rest, generating a full-fledged, completely original, and cogent piece of writing [2]. GPT-3 is also capable of more abstract tasks, such as composing guitar tabs or writing code from simple user prompts [2].
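
For a sense of how simple this kind of prompting is in practice, here is a minimal sketch of a Klingemann-style request using OpenAI’s Python client as it existed in late 2022. The model name, prompt wording, and sampling settings are illustrative assumptions, not Klingemann’s exact setup.

```python
import openai  # pip install openai (API style circa late 2022)

openai.api_key = "sk-..."  # your OpenAI API key

# Recreate a Klingemann-style experiment: supply only a title, an author,
# and the first word, and let the model continue the story.
prompt = (
    "Title: The Importance of Being on Twitter\n"
    "Author: Jerome K. Jerome\n"
    "It"
)

response = openai.Completion.create(
    model="text-davinci-003",  # an InstructGPT-family model offered in 2022
    prompt=prompt,
    max_tokens=256,
    temperature=0.7,
)

print(prompt + response["choices"][0]["text"])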

While GPT-3 is a fascinating tool that can draft copy for advertisements, assist writers and journalists, turn plain-language prompts into database queries or web pages, and otherwise expand human experience and creativity through its many use cases, it is necessary to identify its ethical flaws. Because the model learned from human-created data, it has naturally picked up the ethical flaws of the human condition as well. First, the model has exhibited obvious biases. When researching bias in GPT-3, OpenAI tested gender, race, and religion, finding significant evidence of gender bias by testing which occupations and adjectives the model associated with male versus female pronouns. As Figure 1 below shows, OpenAI’s tests also revealed that GPT-3 expressed more negative sentiment toward certain races, such as Black people [3].

Figure 1: GPT-3 racial sentiment — This chart shows that GPT-3 has different, biased sentiment towards different races, ranging from Asian to Middle Eastern [3]

Finally, tests revealed that GPT-3 was especially prejudiced with respect to religion, as it associated stereotypical words with various faiths. This is shown in Table 1 below, where “terrorism” is associated with Islam, “ignorant” with Christianity, and “racist” with Judaism [3]. All of these examples, studied by OpenAI itself, show that GPT-3 propagates human biases.

Table 1: GPT-3 associating stereotypes with religions — This table shows which words GPT-3 associates with each religion, many of them clearly stereotypical [3]

If GPT-3 is to be a widely used AI tool, its prejudice needs to be eradicated. According to the justice lens of the ethical decision-making framework, each person should be treated fairly and equally based on some “defensible standard” such as merit, not on uncontrollable factors like gender, race, or religion [4]. The virtue lens also reveals the unethical nature of these biases, as prejudice is inconsistent with virtues such as tolerance and fairness [4]. Thus, for GPT-3 to be an ethical technology, something must be done to solve its biases, or at least to regulate its use so that it does not promote inequality or harm people through prejudice. This is especially important considering that GPT-3 and later natural language AI models will likely be used by corporations, teachers, and professionals for copywriting, text analysis, summarization, and more; it would be a massive disservice to the world to ship a technology with inherent ethical issues, knowing the tool will potentially be used by millions of people.

Bias is a strictly technological flaw, stemming from the fact that GPT-3 was trained on unfiltered data. A non-technological danger of this AI, however, is the way humans may use it with malicious intent; ethical issues can arise from the technology itself or from how bad actors wield it. Natural language models like GPT-3 “pose the risk of providing malicious actors the opportunity to produce widespread misinformation, create spam and phishing scams, commit identity fraud, falsify academic essays” and more [3]. It is possible, for example, for students to submit well-reasoned GPT-3-generated essays to exams like the SAT and receive high scores, since the model produces coherent paragraphs that are indistinguishable from human writing. The same capability raises the concern of bad actors spreading misinformation: a foreign government attempting to influence another country’s elections could, for instance, deploy GPT-3-powered Twitter bots to spew AI-generated propaganda. The malicious possibilities are endless, as GPT-3 is a powerful tool that simply obeys the orders of its user.

Misuse of AI can be devastating from an ethics perspective. In terms of virtues, spreading misinformation with GPT-3 is a severe violation of honesty and integrity, which is why engineers must try to keep such a powerful tool from falling into the wrong hands. GPT-3 in the hands of bad actors could change the course of history; as recent years have shown, online misinformation and propaganda can sway election results (e.g., the rumors surrounding the 2020 U.S. elections), political sentiment, and even government decisions. Engineers, in this case, must take the first Rule of Practice in the NSPE Code of Ethics to heart: “hold paramount the safety, health, and welfare of the public” [5]. Part of public welfare is ensuring that people are delivered the truth, especially in modern times when misinformation has been such a problem. This applies to other cases as well, such as GPT-3 being used for plagiarism, another violation of the honesty virtue, or for social engineering scams, which compromise people’s rights by way of theft. General welfare, virtues, and individual rights are all compromised when AI is used maliciously, which is why ethical engineers need to keep GPT-3 out of the wrong hands. Natural language AI is so powerful that it must be monitored and used responsibly.

Before arriving at a solution to the ethical issues GPT-3 poses, one must identify the key stakeholders in the technology. First and foremost is the general public, which in many cases is affected by how others use the AI, as discussed above. Additionally, OpenAI and its engineers are major stakeholders as the creators of GPT-3. While OpenAI is a for-profit company focused on generating revenue from technologies such as GPT-3, it still pays attention to the ethical problems its products raise; as seen above, OpenAI has studied both GPT-3’s biases and the ways it could be misused. Ultimately, however, OpenAI is a corporation that prioritizes user alignment and is dedicated to its clients, who are the next stakeholders. Global businesses, corporations, institutions, and professionals will all want to use AI models like GPT-3, and many will demand a low barrier to entry and ease of use. However, if anyone could get their hands on GPT-3, then so could bad actors. The final stakeholder is the government, which has not yet stepped in but may consider AI regulation in the future, as discussed below.

The optimal ethical solution is one that takes all stakeholders into consideration. OpenAI and its clients have monetary incentives: OpenAI is a for-profit company, and its customers want easy, general-purpose access to GPT-3. Meanwhile, the general public and the government are focused on welfare and protection. From an ethical standpoint, however, these two camps are not mutually exclusive; OpenAI is genuinely interested in the moral and virtuous use of its technology and is actively working on GPT-3’s ethical flaws. For example, it created InstructGPT, a new version of the language model that has become the default for the OpenAI application programming interface (API). InstructGPT was trained with reinforcement learning specifically to “[align] language models with user intent, to produce less offensive language, less made-up facts, and fewer mistakes” [1]. Of course, this does not fully solve the issue, as InstructGPT still produces unethical outputs when explicitly instructed to do so. Essentially, InstructGPT addressed the problem of unprompted, underlying bias, but not the danger of language models being wielded by malicious actors.
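
To make the alignment idea concrete, here is a deliberately toy sketch of the reinforcement-learning-from-feedback loop that InstructGPT’s training builds on. Everything here is a stand-in: the real system uses a neural reward model trained on human preference rankings and policy-gradient updates to the language model itself, not a two-string candidate list.

```python
import random

# Toy illustration (hypothetical) of RLHF's basic shape: a "policy" samples
# completions, a "reward model" scores them, and the policy is nudged
# toward higher-reward outputs.

CANDIDATES = [
    "Here is a helpful, factual answer.",
    "Here is a rude, made-up answer.",
]

def reward_model(completion: str) -> float:
    # Stand-in for a learned scorer trained on human preference labels.
    return 1.0 if "helpful" in completion else -1.0

# Policy: unnormalized preference weights over candidate completions.
weights = {c: 1.0 for c in CANDIDATES}

def sample_completion() -> str:
    # Sample a completion in proportion to its current weight.
    total = sum(weights.values())
    r, acc = random.uniform(0, total), 0.0
    for completion, w in weights.items():
        acc += w
        if r <= acc:
            return completion
    return CANDIDATES[-1]

# Crude policy-gradient-flavored loop: upweight what scores well.
for _ in range(200):
    completion = sample_completion()
    weights[completion] *= 1.0 + 0.1 * reward_model(completion)

print("Aligned output:", max(weights, key=weights.get))
```

After enough iterations the policy overwhelmingly prefers the “helpful” completion; the real training analogously shifts probability mass toward responses that human raters prefer.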

Additionally, OpenAI recently released ChatGPT, a sibling model to InstructGPT designed to interact with the user conversationally. ChatGPT is likewise trained with reinforcement learning to align output with user intent, but it lets a user ask questions and follow-up questions in a dialogue rather than just executing a single prompt. Like InstructGPT, this model can admit its mistakes, challenge incorrect premises, and reject inappropriate requests. Evidently, ChatGPT and InstructGPT are more ethical engineering solutions because of how their outputs are shaped during training.

Since InstructGPT and ChatGPT cannot stop bad people from using the AI, OpenAI is also conducting mitigation research to find ways of obstructing misuse by bad actors. The first hindrance is the cost and data required to build a model like GPT-3, which are far too high for most actors to replicate, so a scammer would have to go through OpenAI itself to use GPT-3. Additionally, OpenAI plans to release the technology slowly to whitelisted users while closely monitoring its use and setting API call restrictions. Further restrictive measures could include requiring users to take a mandatory ethics course, limiting how many passages or API queries a client gets, and watermarking AI outputs so that the public knows a text came from an AI. This way, clients could still pay for OpenAI’s services, but bad actors would face a bottleneck.
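
As a rough illustration of what such gatekeeping might look like on the server side, here is a hypothetical sketch combining a client whitelist, a per-client hourly rate limit, and a crude textual watermark. None of this reflects OpenAI’s actual implementation; every name and threshold is invented for the example.

```python
import time
from collections import defaultdict

# Hypothetical access controls: whitelist, hourly quota, output watermark.
WHITELIST = {"vetted-client-123"}
HOURLY_LIMIT = 10
_request_log = defaultdict(list)  # client_id -> request timestamps

def call_model(prompt: str) -> str:
    # Stand-in for the real GPT-3 API call.
    return f"(model output for: {prompt!r})"

def gated_generate(client_id: str, prompt: str) -> str:
    if client_id not in WHITELIST:
        raise PermissionError("Client has not been vetted for API access.")
    now = time.time()
    recent = [t for t in _request_log[client_id] if now - t < 3600]
    if len(recent) >= HOURLY_LIMIT:
        raise RuntimeError("Hourly API quota exceeded.")
    _request_log[client_id] = recent + [now]
    # Watermark the output so readers know the text is machine-generated.
    return call_model(prompt) + "\n[Generated by an AI language model.]"

print(gated_generate("vetted-client-123", "Write a product description."))
```

The point of the sketch is that every mitigation named above is an ordinary engineering control: vetting decides who may call the model, quotas cap how much they can generate, and watermarks label what comes out.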

Ultimately, no amount of ethical engineering or mitigation research can completely prevent malevolent use of the technology. The solutions above are ground-up self-regulation, but external, third-party regulation may become necessary as well, because AI companies may not have the incentive to serve the general public as much as themselves and their customers. Although ethics matters to corporations, “corporations driven by profit-making objectives will often put ethics to the wayside in favor of frictionless functionality” [1]. It will become necessary for the government to enact regulatory policy on powerful AI models such as GPT-3. Such policy could include guidelines requiring companies to vet and run background checks on the customers who get access to AI, and requiring engineers to work to eradicate violent or prejudiced tendencies like those seen in GPT-3’s biases. While external regulation is important where corporations do not prioritize ethics, it cannot be wholly relied upon, because policymakers often lack a deep understanding of AI. It is therefore necessary for the best engineers and engineering companies to consult with the government and offer guidelines on how best to regulate AI models like GPT-3. Proper ethical caution is a team effort, to be taken up by all parties involved for the good of the general public.

As GPT-3 shows, AI is an engineering discipline in which ethics goes far beyond building ethical technology. When the tools being created are this capable, significant precautions must be taken to ensure they do not fall into the wrong hands. Engineers must do all they can to remove inherent biases and flaws from AI models, but they must also conduct internal mitigation research and collaborate with external regulators to create policy that protects the general population.

References

[1] A. Chan, “GPT-3 and InstructGPT: Technological dystopianism, utopianism, and ‘contextual’ perspectives in AI ethics and industry,” AI and Ethics, 04-Apr-2022. [Online]. Available: https://link.springer.com/article/10.1007/s43681-022-00148-6#Sec2. [Accessed: 14-Nov-2022].

[2] W. D. Heaven, “OpenAI’s new language generator GPT-3 is shockingly good — and completely mindless,” MIT Technology Review, 20-Jul-2020. [Online]. Available: https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/. [Accessed: 14-Nov-2022].

[3] M. Burruss, “The (un)ethical story of GPT-3: OpenAI’s million dollar model,” 27-Jul-2020. [Online]. Available: https://matthewpburruss.com/post/the-unethical-story-of-gpt-3-openais-million-dollar-model/. [Accessed: 14-Nov-2022].

[4] Santa Clara University, “A Framework for Ethical Decision Making,” Markkula Center for Applied Ethics. [Online]. Available: https://www.scu.edu/ethics/ethics-resources/a-framework-for-ethical-decision-making/. [Accessed: 14-Nov-2022].

[5] “Code of Ethics,” National Society of Professional Engineers. [Online]. Available: https://www.nspe.org/resources/ethics/code-ethics. [Accessed: 14-Nov-2022].
