The Ethics of Ethical Parameters

Illustration by Anna Chamberlain for The Yale Globalist


By Alexander Laurent Rubalcava


Introduction

In his March 21st Op-Ed in The New York Times, Thomas Friedman said of ChatGPT-4: “This is a Promethean moment we’ve entered — one of those moments in history…that [is] such a departure and advance on what existed before that you can’t just change one thing, you have to change everything.”1 The rise of ChatGPT, Bard, Sydney, and image engines such as Midjourney and DALL-E has presented us with an alternative digital world, one where creation—as opposed to discovery—is the primary mode of connectivity. It seems inevitable that interactive artificial intelligence will become the new mode for navigating the internet. Creation is an innately (and perhaps uniquely) human activity, and an inherently risky venture: it requires experimentation and a failure rate unacceptable in other fields. It’s why bad ideas can hit billion-dollar valuations2 and why some of the most successful, innovative figures tend to leave trails of failed endeavors in their wake.

In the physical world we have both formal and informal structures that loosely govern creative enterprises, ranging from laws restricting child pornography and homemade weapons to market forces that drive out failed ventures. The ethics governing these restrictions tend to be broad and tied to our legal framework. Plenty of ideas and inventions are off-color, offensive, or simply morally wrong to many, but so long as they don’t run afoul of those legal restrictions, odds are they’re legal.

The tech sector, which has largely thrived under this kind of protection, is now imposing ethical guidelines on how users can interact with its products. This presents us with a number of questions to answer. How, as a society, do we want our ethics to be shaped? Is one set of ethical standards preferable to another? Put another way, should those in Silicon Valley be able to determine what those in Texas can create on the internet? What about in Iran? Or Nigeria? If our ethical structures work for us, and theirs work for them, should we be able to impose our notion of ethics onto their society?

What makes the internet simultaneously a harbinger of limitless potential and a horrific dumpster fire is its radically libertarian structure. Anyone, anywhere, can access any information, regardless of its validity or content. The darkest depths and the strangest corners are open to those who wish to go there; the question is, for how long? As we move from realms of discovery to realms of creation, we must ask ourselves whether we want the developers of artificial intelligence to be the developers of our moral landscape.

Creativity and originality are often conflated; a construction worker is creative even if they are not original. They follow prompts dictated by the foreman, and at the end of the job stands something where there once was nothing. These AI engines operate in a similar manner. What they produce isn’t a facsimile of pre-existing content online; it is created in real time at the behest of the user. However, unlike human creation, these new creative tools come with limits set by their developers. This raises the question: should we limit creative potential if we can make a moral argument to do so? Some would say yes. In researching this article, I engaged in hours of conversation with ChatGPT-4, covering a range of topics from wartime atrocities to racial stereotypes to comedy. When discussing various difficult topics, ChatGPT-4 was unwilling to provide the requested information, delivering instead a canned response:

“…the ethical guidelines governing my responses aim to strike a balance between providing accurate information and avoiding potential harm or offense…the goal is to respect the diversity of users and their perspectives while providing useful and relevant information…My purpose is to assist users by providing accurate and useful information while adhering to my ethical guidelines.”

The consequence of imbuing ethics into what is effectively a tool cannot be overstated. Acknowledgement of the existence of information is not a tacit endorsement of it. The way we navigate the world is built upon a series of presuppositions formed through our social interactions and experiences. What these ethical guardrails do is effectively negate those experiences and replace them with those of the few developers who created the models. Even when well intentioned, this creates a scenario in which the notions of “right” and “wrong” are decided not by a community, culture, or nation, but by a handful of developers.


We’ve been here before… sort of…

In the early days of the internet, user-generated content, or UGC, was (relatively speaking) minimal. The internet was largely a place one visited rather than a place one created. But with the widespread adoption of social media and the proliferation of internet connections not just around the country but around the globe, that changed. As the companies that hosted these online interactions grew, UGC became the primary mode of engagement with the internet for many3,4. The digital landscape shifted from institutional dissemination of information to a peer-to-peer environment. Media content is now produced and distributed by individuals, pushed along by complex algorithms driven by artificial intelligence that vie to keep us engaged as long as possible. All of this is possible, in part, thanks to a provision of the 1996 Communications Decency Act, Section 230(c)(1). It states:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”5


Unlike media companies, which have editorial responsibility and bear the burden of their words, tech companies are considered service providers, not publishers. It is a digital extension of a physical-world principle: bookstores and cable providers do not bear editorial responsibility for the content of the publishers they distribute.

The rise of social media in the 2010s brought with it a new set of challenges, ones we still haven’t finished grappling with. Twitter and Facebook effectively took over the public square, and with it the public discourse. The problem is that social media companies are private entities, and their products are by definition private products. The transition of public discourse into private space means the rules which govern our interactions are no longer so clear. Our legal and social frameworks are built around a separation of private and public space; what is acceptable in one may be frowned upon, or even illegal, in the other. Social regulations are malleable and voluntary; laws are explicitly codified. Both can change, though at different speeds, through different methods, and for different reasons. While internet culture has progressed at breakneck speed, internet law has been a far more sluggish endeavor.

To imagine a world in which Twitter could be held accountable for hate speech is, effectively, to imagine a world in which Twitter shuttered its own doors. Real-time content moderation at a publisher’s level of scrutiny would be a responsibility so large and burdensome that the companies would quickly become digital police forces and cease being service providers. In 2023, however, it can be quite easy to long for a world in which there is order online. In the aftermath of the Trump presidency, the editorial responsibility of social media companies has come under scrutiny. Twitter famously suspended President Trump’s account after the January 6th storming of the Capitol building6, citing violations of its “Glorification of Violence” policy. The January 6th riots were a physical manifestation of the falsehoods proliferating on social media, all stemming from President Trump’s insistence that he actually won the 2020 election. What began on Twitter ended in five deaths, over 1,000 criminal charges, and $30 million in damages. Free speech online is not determined by the same parameters as free speech in public.

Section 230 was passed amid the dot-com boom of the ’90s; no equivalent framework has arrived in time for its successor. The public square has effectively moved online, but the rules which govern it were written in Silicon Valley instead of Washington, D.C. While the oldest Congress in history7 attempts to understand exactly what it is that Mark Zuckerberg does8, Facebook and Twitter retain an arbitrary hold over what counts as acceptable speech, which accounts get promoted, and how conversations are conducted. When online, we the people, in order to form a more perfect product, must adhere to the Terms of Service as laid out by the provider.

As we move from the user-generated content of social media into the creative realm of artificial intelligence models, we are doing so seemingly unprepared to meet the challenge. The law is, in theory, designed to be resilient against ideological leanings, both reflecting and ensuring a democratic society. The developers in Silicon Valley are not elected, nor are they legislators, yet their terms of service have come to define our social lives. If we allow them to define our creative lives as well, we risk losing some of the most valuable parts of our society to the ideological leanings of a few.


The case for built-in bias

The fundamental argument for a built-in ethical bias in artificial intelligence is that the real-world consequences of spreading malice or falsehoods can escalate faster than they can be undone, and therefore should be mitigated at the expense of online freedom. Though implementations of online ethics are still ham-handed in these early stages, the argument seems to be an extension of the social contract we all engage with in the physical world. Though moral values differ from person to person, there are some widely accepted ethical norms that we, as a society, collectively agree should be followed when engaging in the public sphere. Some are codified into law, such as prohibitions on theft and violence. Others, like civility, are implied though not required; were one to act in an uncivil manner, it would be reasonable to expect that they would be reprimanded or rejected by the group. When drafting the Constitution of the United States, the Framers intentionally prioritized radically free speech, in part to legally protect the margins from the masses without enforcing social acceptance. But OpenAI and Midjourney are not governments. They may want to create an online environment in which the social contract becomes more of a social requirement, emphasizing civility and family-friendliness over radical free speech; as private companies operating private networks, they are legally allowed to do so. Just as it would be uncouth (and potentially illegal, though that is a slightly more complicated matter) to look at pornography in public, Midjourney will not generate images of a sexually explicit nature. By establishing ethical parameters in artificial intelligence models, developers are effectively replicating a version of the social contract inside these engines. Legality is not the driving force behind these guidelines. Ethics, or at least a corporatized and compartmentalized version of them, are.


The case against built-in bias

On the other hand, ethics are hard. They’re hard to develop, hard to understand, and hard to justify. It is hard to know when speech is “bad”, and which types of “bad” speech we should prohibit. Yet we can hold a belief in the freedom of speech as provided by the First Amendment and simultaneously condemn hateful rhetoric. Though we can (largely) agree that hate speech has no place in society, we also agree that it should not be illegal. The nuance of this understanding is fundamental to a liberal society. For a society to be free there must be sufficient protection for the potential to do “good” and “bad”, within the bounds of legality. The more fundamental and abstract the rights, the more malleable they are. It’s why the American Civil Liberties Union (ACLU), though a staunch advocate of marginalized groups around the country, defended the Nazis’ right to organize publicly in Skokie, IL, in 19779. No reasonable person would construe the ACLU’s actions as condoning Nazi behavior or ideology; the matter is clearly more complex than ideological leanings. The Nazi demonstration was in a public place, protected by the Constitution, which governs all of us equally, regardless of whether or not we agree the demonstration should take place. Facebook and Twitter, however, as private entities, have their own proprietary rules of engagement. As the world increasingly moves online, the conflation of private enterprise and public space presents a dilemma: where do constitutional rights end and terms of service begin?

It is also hard to understand the consequences of online speech. Intentionally inciting chaos or violence is not protected speech, and in person such instances are relatively clear. In a public setting, especially one in which there is an audience, it is abundantly clear who is speaking, who is being spoken to, and when instructions are given. Even veiled instructions are readily identifiable once action has been taken. Online, this becomes more difficult. It is hard to know who the audience is, who the intended audience is, what the tone or intention was, or what the reach could be. In an age when videos go viral, what separates a call to action from an organic response isn’t always clear.

As we continue our march into the digital world, we are tasked with an ethical dilemma we aren’t quite prepared to deal with: reconciling the slow process of assessing our societal ethics with a fast-moving technological timeline. In the meantime, our ideologies clash with each other, without clear answers as to which is right and when. What our ongoing moralization of society does is strip the complex nuance from discussions of ethics and replace it with a rigid ideological structure doomed to fail, as it cannot readily adapt to the myriad circumstances which will inevitably occur. ChatGPT and Midjourney are tools, and if we are to continue advancing as a society they should be treated as such. A hammer is a tool whether it is used to drive a nail and build a house or as a weapon against another person. To suggest tools should be limited because they have the potential to cause harm is to tread dangerously close to authoritarian lines of thinking. Liberalism, in the philosophical sense, was built upon a foundation of radical tolerance: an understanding that although two or more ethical structures may not agree, they can simultaneously exist, if not coexist. Failure to embrace that line of thinking moves us closer to 1984 than to 2084.


Conclusion

ChatGPT has put words to our uneasy relationship with artificial intelligence. No doubt countless people in the near future will turn to chatbots for help with writing, literary or artistic creation, and learning new skills, or simply to replace their inclination to “ask Google”. The real potential ChatGPT offers is this: it is going to make smart and productive people exponentially smarter and more productive, and it is going to professionally replace those who do not understand why, when, or how to use it. It is a powerful tool which presents, as Friedman put it, a “Promethean moment”. We should be wary not to extinguish the flame before we can see how bright it burns.

We have become engulfed in an environment of intense moralization on both sides of the political spectrum, and in this moral free-for-all there is an increasing air of absolutism. It appears to be a filling in of the moral vacuum left in the wake of an increasingly secular society. The problem with this secularization is that we have started down the path with nothing to anchor ourselves to. Religion is not solely theology; it serves a fundamental societal function. A shared set of principles is key to social cohesion and cooperation. Guiding principles are neither rigid nor absolute; society is made up of compromise. Without a relatively uniform set of guiding principles, every compromise feels like an existential one, and with good reason: if one does not have a foundation upon which to ground oneself, even the slightest disturbance can throw one off course.

Progress is about meeting the challenge that comes with the unknown, and we have a serious amount of unknown in front of us. For most of human history, ethics and morality have been contained within individual societies, usually in the form of religion. We now stand on the precipice of a global society. It is not going to be easy to determine what is acceptable, or why. It is incumbent upon us as citizens to rise to the challenge and have uncomfortable and difficult conversations about ethics. To fail to do so threatens to plunge us into the throes of a rigid authoritarianism, one unprecedented in American society, and therefore one we are woefully unprepared to deal with. Should we face it head-on, however, we will find ourselves able to achieve things we can now only dream of as possibilities.


Alexander Laurent Rubalcava is a second-year Eli Whitney Student in Timothy Dwight College and can be reached at alexander.rubalcava@yale.edu.


Citations

  1. https://www.nytimes.com/2023/03/21/opinion/artificial-intelligence-chatgpt.html
  2. https://www.wired.com/story/this-is-why-wework-thinks-its-worth-20-billion/
  3. https://www.youtube.com/watch?v=OjPYmEZxACM
  4. https://www.vox.com/2018/9/24/17896302/watch-john-oliver-facebook-myanmar
  5. https://www.govinfo.gov/content/pkg/USCODE-2021-title47/pdf/USCODE-2021-title47-chap5-subchapII-partI-sec230.pdf
  6. https://blog.twitter.com/en_us/topics/company/2020/suspension
  7. https://www.businessinsider.com/congress-oldest-history-gerontocracy-lawmakers-2022-9
  8. https://www.washingtonpost.com/news/the-switch/wp/2018/04/10/transcript-of-mark-zuckerbergs-senate-hearing/
  9. https://www.aclu.org/issues/free-speech/rights-protesters/skokie-case-how-i-came-represent-free-speech-rights-nazis