As a team, discuss situations in which a software system was created and demonstrates some behavior that you consider unethical.

Explain how your example is problematic. Discuss why it seems unethical, and whether the problem lies in the software engineers' behavior, the functions performed by the software, or both. The problematic aspect of your situation might not fit neatly into this engineer-versus-product division.


Example 1: Mass Surveillance

Consider this: the year is 2024 and you live in one of the most populous countries in the world. Although it seems you could disappear into a sea of people, the government can still track your every move. It knows where you are, what you are purchasing, whom you interact with, whether you commit a crime, and whether you disrespect the government in even a small way. This is only a small portion of what this country's government can monitor, and our team believes that this kind of surveillance is deeply unethical toward the country's citizens.

If you have not guessed it yet, the country in question is modern-day China. China has an estimated population of 1.412 billion people and over 700 million surveillance cameras, which works out to roughly one camera for every two people. These cameras are quite literally everywhere, and many are equipped with facial recognition so that they can immediately identify anyone walking down the street.

The roots of this surveillance state trace back to the Maoist era, following the establishment of the People's Republic of China in 1949, when mechanisms of social control were built to consolidate state power; the modern camera network is a far more recent development that extends that legacy. It is difficult to ascertain whether the software engineers involved in creating this mass-surveillance infrastructure were pressured or coerced into doing so by the state. Regardless, the system was created for the unethical purpose of control and power gain.

While it is somewhat unclear what the Chinese government actually does with all the information these cameras collect, this data collection poses a serious threat to the privacy of Chinese citizens. Some may argue that it is a good thing that the government can monitor its people in order to reduce crime. While that may seem like a major benefit of mass surveillance, there is a real risk that the government in charge of the cameras will abuse its power, as Mao did. The government may, for example, deem rival political movements dangerous and use the cameras at its disposal to hunt down the opposition. If someone were to speak out against this breach of privacy, what is stopping the government from using those same cameras to find that person and silence them?

In this case, the problem lies not with the software engineers developing the camera software, but with the functions being performed with the cameras. It is no secret that people do not enjoy being spied on. This raises the question of whether it is ethical for a government superpower to strip citizens of their privacy for its own benefit. Mass surveillance may sound good on paper, but with that power in the wrong hands it can lead to a dystopian world in which people are fully controlled by the government.

Sources:

https://www.firstpost.com/world/big-brother-is-watching-china-has-one-surveillance-camera-for-every-2-citizens-12380062.html


Example 2: Lockheed Martin 

Working as a software engineer for Lockheed Martin can present several ethical and moral dilemmas, which can be attributed to both the behavior of the engineers and the nature of the products they work on. Lockheed Martin is primarily a defense contractor, producing military hardware and software. The products developed by Lockheed Martin, such as weapons systems, surveillance technology, and other military equipment, are designed for purposes that inherently involve violence, destruction, and potentially loss of life. This raises ethical concerns about contributing to industries that profit from war and conflict, and may be perceived as supporting militarization and aggression.

Software engineers at Lockheed Martin are tasked with developing and maintaining systems that facilitate military operations, implicating them in the potential consequences of their use, including civilian casualties, human rights abuses, and exacerbation of global conflicts. While individual software engineers may not have direct control over how their work is ultimately used, they are still complicit in the development and deployment of technology that can have devastating consequences. 

Software engineers, like all professionals, have a responsibility to consider the ethical implications of their work and to take steps to mitigate harm. This includes advocating for the responsible use of technology, questioning the purpose and consequences of their projects, and potentially refusing to work on projects that violate their moral principles. However, in a corporate environment like Lockheed Martin, where profit and national security interests often take precedence, employees may face pressure to prioritize project completion and company objectives over ethical considerations.

In summary, working as a software engineer for Lockheed Martin can pose ethical challenges related to the nature of the products being developed, the role of engineers in facilitating their development, and the potential for misuse or unintended consequences. While individual engineers may not be directly responsible for the actions of their employers, they nonetheless have a moral obligation to critically assess the impact of their work and consider the broader ethical implications of their contributions to technology development.


Example 3: Stable Diffusion

Stable Diffusion models are a type of generative AI that can produce images, videos, and animations from text and image prompts. While this open-source technology can serve as a tool for digital art and content creation, it raises several ethical issues. One major concern is the potential for these models to generate explicit and harmful content, such as deepfakes.

Deepfakes are synthetic media that have been convincingly manipulated to misrepresent someone as doing or saying something they did not. Although some of these models include safety measures, those filters are not foolproof, and users may be able to bypass them. The creation of deepfakes without consent can lead to privacy violations and even fraud. In the United States there is no federal law specifically prohibiting the dissemination of this type of content, though it may fall under a patchwork of existing privacy, defamation, and intellectual property laws.

The problem here lies not necessarily with the developers of Stable Diffusion, but with the technology's possible harmful applications. Stable Diffusion is a powerful tool that, in the wrong hands, can cause serious harm. Examples of deepfake abuse are already all over the news, including the generation of explicit images of celebrities such as Taylor Swift, and this is likely only the beginning. While the original researchers and developers behind Stable Diffusion models are not directly to blame, it is unlikely that this outcome came as a complete surprise to them.

Some people have taken this even further, developing an app called Clothoff, which does exactly what its name suggests. In this case, the developers knew exactly what they were building and what its implications were, and chose to make it widely accessible. This is an extremely unethical, and frankly horrifying, application of Stable Diffusion technology.

As developers, it is important that we understand the possible implications of the tools we build, as well as how others may use those tools to further agendas we do not agree with.

https://www.merriam-webster.com/dictionary/deepfake

https://www.theguardian.com/technology/2024/feb/29/clothoff-deepfake-ai-pornography-app-names-linked-revealed

https://www.secoda.co/glossary/what-are-stable-diffusion-models

https://en.wikipedia.org/wiki/Stable_Diffusion