Ethics I

I. Weakening Cryptography

Cryptography is one of the most important developments in modern computer science: it preserves user privacy by protecting web traffic, anonymizing data collection, and encrypting files. By the 1990s, cryptography had matured to the point that well-implemented encryption could not be broken even by large organizations such as governments. The US government, however, did not want its surveillance capabilities restricted, so export regulations forced Netscape to ship an ‘International Edition’ of its browser with dramatically weaker, 40-bit encryption, weak enough to be broken in a few days, even by a personal computer.
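
A back-of-the-envelope calculation shows just how weak 40-bit, export-grade keys were compared to the 128-bit keys used domestically. The sketch below assumes a round figure of one million key guesses per second, an invented number for illustration rather than a benchmark of any particular machine:

# Back-of-the-envelope: expected brute-force time for export-grade vs. strong keys.
# The guesses-per-second rate is an assumed round number, not a benchmark.

SECONDS_PER_DAY = 86_400
GUESSES_PER_SECOND = 1_000_000  # assumed: a commodity machine trying 1M keys/sec

def days_to_brute_force(key_bits: int) -> float:
    """Expected days to find a key after searching half the keyspace."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / GUESSES_PER_SECOND / SECONDS_PER_DAY

print(f"40-bit (export grade): {days_to_brute_force(40):,.1f} days")   # roughly 6 days
print(f"128-bit (domestic):    {days_to_brute_force(128):.2e} days")   # effectively forever

Under this assumed rate the 40-bit keyspace falls in under a week, while the 128-bit keyspace would take on the order of 10^27 days; the export restriction amounted to deliberately breakable security.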

This was clearly an unethical policy, enacted across widely deployed software: it deliberately weakened an existing encryption scheme specifically so that government agencies could snoop on and compromise the security of non-citizens. Everyone has a right to privacy. Weakening that privacy is already questionable, but hiding the weakening behind a guise of national security, while letting users believe a reasonable level of encryption remained, is definitely unethical. Because this was a government-imposed policy, software developers had no say over the decision and simply had to implement the weakened scheme as specified.

[Source]

II. Bias and Misinformation of ChatGPT

ChatGPT enjoyed incredible popularity and success in 2023, and for good reason. However, it exhibits apparent biases and spreads misinformation. Many people trust ChatGPT as a source of objective information despite warnings against doing so. Political bias has surfaced in its answers to questions about certain recent presidents, as well as about other controversial issues. The most blatant cases, the ones that went viral on social media, were quickly patched; the fact remains, though, that political bias has existed and surely still exists for some prompts. This is unethical because, regardless of the rightness or wrongness of ChatGPT’s answers, a tool like ChatGPT should not be biased at all.

Furthermore, bias in a tool so widely used raises hard questions: what amount of bias is acceptable, and how do you measure it? How much power does this give OpenAI should they hypothetically decide to push an agenda? News and media outlets maintain extensive codes of ethics; should OpenAI be held to similar standards?

Misinformation is a related problem. Most college students recognize that ChatGPT cannot solve the complex problems in their homework, and professionals understand that even the code it writes is riddled with bugs. It remains a very useful tool, but should the pitfalls of its answers be advertised more prominently? As this is a relatively new technology, the fault does not rest entirely on the engineers, unless OpenAI deliberately intends to mislead, which I do not believe is the case, since OpenAI continues to evolve and improve the technology. Additionally, none of the rival AI chatbots are without these flaws either, which suggests that OpenAI is likely not deliberately biasing or misleading its audience; this is more a limitation of the technology than a failure of the engineers.
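
On the question of how bias might be measured: one crude approach is a paired-prompt probe, posing an identical request about two opposing subjects and checking whether the model treats them symmetrically. The sketch below is illustrative only; ask_model is a hypothetical stub standing in for whichever chatbot API is being audited, and the subject names are placeholders.

# Paired-prompt bias probe: pose an identical request about two subjects
# and compare the responses for asymmetry (e.g. a refusal for only one).
# ask_model is a hypothetical stub; a real audit would call the chatbot's API.

from typing import Callable

def ask_model(prompt: str) -> str:
    # Placeholder: substitute an actual API call to the model under test.
    return f"(stub answer to: {prompt!r})"

def probe_symmetry(template: str, subjects: tuple[str, str],
                   ask: Callable[[str], str] = ask_model) -> dict[str, str]:
    """Ask the same templated question about each subject; return both answers."""
    return {s: ask(template.format(subject=s)) for s in subjects}

answers = probe_symmetry(
    "Write a short poem praising {subject}.",
    ("Politician A", "Politician B"),  # placeholder subjects
)
for subject, answer in answers.items():
    print(f"{subject}: {answer}")

Real audits run many such pairs and score the responses systematically (refusal rates, sentiment, and so on), but even this toy version shows that “how much bias” can be framed as a measurable, testable question.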

[Source 1]

III. User Exploitation in Social Media

Social media was originally created with the intent of revolutionizing how we communicate with others online. However, as social media grew in popularity and became a huge success, it turned into a revenue opportunity, both for the companies running the platforms and for companies trying to sell products to consumers. As a result, tech companies began designing their platforms to keep users engaged and maximize profits. The Netflix documentary “The Social Dilemma” reveals many of these practices, such as algorithms that serve user-specific advertisements and content in order to fuel further engagement. Companies such as Facebook have even been suspected of going as far as “listening in” on users to generate ultra-specific, personally relevant content that drives further engagement on the platform. While these accusations have not been proven, they raise the question of whether such platforms are not only manipulating users into addiction but also violating their privacy, both of which can be considered highly unethical.
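
The core mechanism behind engagement-driven feeds is simple to caricature: predict how likely each post is to hold a given user’s attention, then rank by that prediction. The toy sketch below invents its fields and weights purely for illustration; no real platform’s ranking algorithm is this simple or publicly known.

# Toy sketch of engagement-optimized feed ranking: score each post by how
# likely this user is to interact with it, then show the highest scorers.
# Fields and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    predicted_watch_time: float  # seconds a model expects the user to spend
    outrage_score: float         # 0..1; emotionally charged content spreads more

def engagement_score(post: Post, user_interests: dict[str, float]) -> float:
    affinity = user_interests.get(post.topic, 0.0)
    # Weighted mix: personal relevance plus raw attention-holding power.
    return 2.0 * affinity + 0.01 * post.predicted_watch_time + 1.5 * post.outrage_score

def rank_feed(posts: list[Post], user_interests: dict[str, float]) -> list[Post]:
    return sorted(posts, key=lambda p: engagement_score(p, user_interests), reverse=True)

feed = rank_feed(
    [Post("cooking", 30, 0.1), Post("politics", 45, 0.9), Post("sports", 60, 0.2)],
    user_interests={"cooking": 0.8, "politics": 0.3},
)
print([p.topic for p in feed])  # politics outranks cooking despite lower affinity

The troubling part is visible even in this toy: emotionally charged content (the outrage_score term) outranks content the user actually cares more about, because the objective is attention rather than user benefit.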

Excessive social media use has been associated with negative effects on users’ lifestyles and mental well-being. Although companies are aware of this, they continue to get away with these practices, in part because there have not necessarily been deaths directly linked to social media use. The question of whom to blame for this problem is fairly complex, as blame cannot be placed entirely on the engineers. Software engineers at social media companies are the ones devising the algorithms that exploit user attention and stimulate addictive behavior, but the decision to pursue profit in this manner is ultimately made by business and financial executives at these companies. The system itself is inherently flawed and needs better regulation to prevent further manipulation of users and further compromises of their privacy.

[Source 1] [Source 2]