Ethics 1

Example 1: 

The Volkswagen emissions scandal, in which a software system was designed to cheat emissions tests, is an example of software engineered for deception, which is unethical. The software, known as a ‘defeat device,’ worked by activating equipment that reduced emissions only when it detected that the vehicle was being tested. When the vehicle was not being tested, the algorithm left this equipment inactive and the vehicle operated normally, producing emissions well above the legal limit. What makes this software problematic is that it functions deceptively to bypass legal checks on vehicle emissions in order to gain an unfair competitive advantage for the company.
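
A minimal sketch of the kind of conditional logic this describes is below. The variable names and the test-detection heuristic are illustrative assumptions, not Volkswagen's actual implementation; the point is that the deception lives in a single branch of code.

```python
# Hypothetical illustration only -- not Volkswagen's actual code.
# A "defeat device" amounts to a conditional branch: full emissions
# controls are enabled only when the software believes a regulatory
# test is in progress.

from dataclasses import dataclass

@dataclass
class VehicleState:
    steering_angle_deg: float   # lab dyno tests typically involve no steering input
    speed_kmh: float

def looks_like_emissions_test(state: VehicleState) -> bool:
    """Crude, assumed heuristic: a fixed drive cycle with the wheels held
    straight resembles a laboratory dynamometer test rather than road driving."""
    return state.steering_angle_deg == 0.0 and state.speed_kmh < 120

def set_emissions_controls(state: VehicleState) -> str:
    if looks_like_emissions_test(state):
        return "FULL_NOX_TREATMENT"   # behave legally while observed
    return "REDUCED_TREATMENT"        # pollute on the road

print(set_emissions_controls(VehicleState(steering_angle_deg=0.0, speed_kmh=50)))   # FULL_NOX_TREATMENT
print(set_emissions_controls(VehicleState(steering_angle_deg=12.5, speed_kmh=50)))  # REDUCED_TREATMENT
```

The ethical problem is visible in the structure itself: the branch exists solely to make the vehicle behave differently under regulatory observation.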

The unethical aspect of this software is a product both of its functionality and of the conduct of the engineers themselves. The software had no practical purpose other than deceiving the test, so its function is problematic in itself. The engineers, at least those managing the project, understood that the software could only be used illegally and unethically, yet carried it through to full production regardless.

https://www.nytimes.com/interactive/2015/business/international/vw-diesel-emissions-scandal-explained.html

Example 2: 

Predictive policing algorithms are another example of how software systems can be used in potentially unethical ways. Several law enforcement agencies use these algorithms to forecast where crimes are likely to occur. Though their premise, analyzing historical crime data to predict future criminal activity, should theoretically help law enforcement allocate resources more effectively, their practical application has revealed deeply troubling problems of bias and discrimination.

When historical crime data, inherently influenced by societal prejudices and systemic biases, is fed into predictive policing algorithms, these biases are encoded into the predictions. Consequently, communities that have been historically over-policed due to their racial or socioeconomic makeup are unfairly targeted, perpetuating a cycle of discrimination. This phenomenon raises serious ethical questions about the fairness and justice of law enforcement practices, as these algorithms effectively amplify existing inequalities instead of mitigating them.
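
A toy simulation of that feedback loop, using invented neighborhoods and numbers: patrols go where past records point, and crimes are only recorded where patrols are present, so a historical disparity reproduces itself even when the underlying crime rates are identical.

```python
# Toy illustration of a predictive-policing feedback loop.
# Neighborhood names, rates, and parameters are invented for demonstration.

true_crime_rate = {"A": 0.05, "B": 0.05}      # identical underlying crime rates
recorded = {"A": 120, "B": 40}                # A has been over-policed historically
population = {"A": 10_000, "B": 10_000}

for year in range(5):
    total = sum(recorded.values())
    # "Prediction": patrols are allocated in proportion to past records.
    patrol_share = {n: recorded[n] / total for n in recorded}
    for n in recorded:
        # Crimes only enter the dataset when patrols are present to record
        # them, so detection scales with patrol presence.
        detected = population[n] * true_crime_rate[n] * 0.8 * patrol_share[n]
        recorded[n] += int(detected)
    share_A = recorded["A"] / sum(recorded.values())
    print(f"year {year}: records={recorded}, A's share of records (and next year's patrols)={share_A:.2f}")
```

Running this, neighborhood A keeps receiving 75% of the patrols year after year, and the data keeps "confirming" the prediction, even though A and B have exactly the same true crime rate.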

The responsibility of software engineers in developing these algorithms lies mostly in understanding how their work can have real effects on underprivileged communities. A deep awareness of the societal context in which the technology is applied is key. Engineers must critically evaluate the data they use, recognize its potential biases, and actively work to mitigate those biases during algorithm design and training. Ignoring or neglecting these biases not only leads to discriminatory outcomes but also reflects a lack of ethical diligence on the part of the engineers. Moreover, the ethical implications of predictive policing algorithms extend beyond individual engineers. They underscore the need for comprehensive interdisciplinary collaboration involving sociologists, ethicists, and policymakers.
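
As one hypothetical example of what critically evaluating a model might look like in practice, an engineer could compare how often the system flags areas associated with different demographic groups before deployment. The data and audit threshold below are assumptions made purely for illustration.

```python
# Hypothetical pre-deployment audit: compare how often the model flags
# areas belonging to different demographic groups. The scores, group labels,
# and threshold are illustrative assumptions, not a real agency's figures.

def flag_rate(predictions, areas, group):
    """Fraction of areas in `group` that the model flags as hotspots (score >= 0.5)."""
    in_group = [a for a in areas if a["group"] == group]
    flagged = [a for a in in_group if predictions[a["id"]] >= 0.5]
    return len(flagged) / len(in_group)

areas = [
    {"id": 0, "group": "majority_white"},
    {"id": 1, "group": "majority_white"},
    {"id": 2, "group": "majority_minority"},
    {"id": 3, "group": "majority_minority"},
]
predictions = {0: 0.2, 1: 0.6, 2: 0.7, 3: 0.9}   # model's hotspot scores

ratio = flag_rate(predictions, areas, "majority_minority") / \
        flag_rate(predictions, areas, "majority_white")
if ratio > 1.25:   # arbitrary audit threshold for this sketch
    print(f"Disparate flagging ratio {ratio:.1f}: investigate the training data before deployment")
```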

Overall, predictive policing algorithms can easily become unethical if the software engineers and other professionals involved in their formulation don’t put in a serious effort to account for bias and discrimination when creating them.

https://www.washingtonpost.com/technology/2022/07/15/predictive-policing-algorithms-fail/

Example 3:

While recommendation algorithms have greatly increased convenience and reduced the time spent searching for content across a wide range of services, the consequences can be severe when proper restrictions are not put in place and bias is not actively mitigated. YouTube’s suggestion algorithm for what users should ‘Watch next’ has often been the focus of scrutiny, putting it at the center of several scandals. The largest scandal the company has faced over its video recommendation algorithm is its involvement in the rise of what became known as the ‘alt-right pipeline’ in the 2010s. Starting mainly with young boys who watched gaming content littered with controversial commentary and jokes about various social issues, YouTube’s recommendation algorithm began identifying an overlap between this audience and the audience for more conservative-leaning videos. As these viewers began to explore slightly more extreme content, the algorithm picked up on the growing trend and suggested such videos to users with similar browsing behaviors. As a result, YouTube unintentionally radicalized thousands of young men by exposing them to right-wing extremist content that had once been relegated to the fringes of the internet.
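
A highly simplified sketch of the ‘Watch next’ dynamic described above. The video labels and co-watch counts are invented and the real system is vastly more complex, but purely engagement-driven item-to-item recommendation with no content safeguards can drift in exactly this way.

```python
# Simplified item-to-item "watch next" logic of the kind described above.
# Video names and co-watch counts are invented; the real system is far
# larger and learns from engagement signals rather than a hand-built table.

# co_watch[a][b] = how many users watched video b right after video a
co_watch = {
    "gaming_commentary":     {"edgy_politics_react": 900, "speedrun_tips": 400},
    "edgy_politics_react":   {"anti_sjw_compilation": 700, "gaming_commentary": 300},
    "anti_sjw_compilation":  {"far_right_monologue": 600, "edgy_politics_react": 200},
}

def watch_next(current_video: str) -> str:
    """Recommend whichever co-watched video similar users engaged with most."""
    neighbors = co_watch.get(current_video, {})
    return max(neighbors, key=neighbors.get) if neighbors else current_video

# Following the top recommendation step by step traces the "pipeline":
video = "gaming_commentary"
for _ in range(3):
    video = watch_next(video)
    print(video)
# -> edgy_politics_react, anti_sjw_compilation, far_right_monologue
```

Nothing in this loop checks what the content is; it only optimizes for what similar viewers watched next, which is how engagement-maximizing recommendations can escort users toward progressively more extreme material.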

The bias and harm resulting from this technological flaw became very difficult to control once YouTube became aware of its full extent. With the size of the audience already in the pipeline, any sudden change to the recommendation algorithm intended to hinder their radicalization would itself be perceived as political bias. As a result, YouTube has since struggled to properly moderate extremist content on its platform, facing the Sisyphean burden of preventing the radicalization of teens.

https://www.reuters.com/legal/us-supreme-court-weighs-youtubes-algorithms-litigation-minefield-looms-2023-02-17/