Ethics Assignment

Cambridge Analytica scandal

Cambridge Analytica, a political consulting firm, collected and exploited personal data from millions of Facebook users without their consent to create targeted political ads during the 2016 U.S. presidential election. The data was obtained through a quiz app built by an external researcher in violation of Facebook's terms of service; the app harvested data not only from the users who installed it but also from their Facebook friends, including Facebook IDs, locations, and likes. This data was then used to build profiles of users and target them with personalized political ads.
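
To make the profiling pipeline concrete, here is a purely hypothetical sketch: it turns a user's page likes into a crude trait score and picks an ad variant accordingly. All page names, trait weights, and ad categories below are invented for illustration and do not describe any real system.

    # Hypothetical sketch of likes-to-profile-to-ad targeting.
    # All names and weights are invented for illustration.

    LIKE_TO_TRAIT = {
        # invented mapping from liked pages to an "openness" trait weight
        "page_outdoors_club": 0.8,
        "page_classic_cars":  0.2,
        "page_poetry_fans":   0.9,
    }

    def openness_score(liked_pages: list[str]) -> float:
        """Average the trait weights of the pages a user liked."""
        weights = [LIKE_TO_TRAIT[p] for p in liked_pages if p in LIKE_TO_TRAIT]
        return sum(weights) / len(weights) if weights else 0.5  # neutral default

    def pick_ad_variant(score: float) -> str:
        """Show a different ad message depending on the inferred trait."""
        return "novelty_framing" if score > 0.6 else "security_framing"

    user_likes = ["page_outdoors_club", "page_poetry_fans"]
    print(pick_ad_variant(openness_score(user_likes)))  # -> novelty_framing

The point of the sketch is that each step is technically mundane; the ethical problem lies in where the likes came from and what the targeting was used for.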

The problematic aspect of this situation is the use of personal data without informed consent. The software engineers who developed the app were not necessarily the ones acting unethically, but they built the technical infrastructure that made the collection and exploitation of personal data possible. The unethical behavior itself was carried out by the firm's executives and their clients, who used the resulting profiles for manipulative purposes.

Volkswagen diesel emissions scandal

In 2015, Volkswagen was caught using software that cheated emissions tests on its diesel engines. The software detected when the car was being tested and adjusted engine operation so that emissions met legal standards, but during normal driving, nitrogen oxide (NOx) emissions were up to 40 times higher than legal limits. The cheating allowed the company to market its diesel engines as environmentally friendly and fuel-efficient while concealing their true impact on the environment and public health.
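
To make the mechanism concrete, the following is a minimal, hypothetical sketch of defeat-device logic. The signal names and thresholds are invented; they only stand in for the kinds of inputs (steering input, wheel speed, test duration) the real software reportedly monitored to recognize a dynamometer test.

    # Hypothetical sketch of defeat-device logic, for illustration only.
    # Conditions and thresholds are invented.

    def looks_like_emissions_test(steering_angle_deg: float,
                                  wheel_speed_kmh: float,
                                  elapsed_s: float) -> bool:
        """Guess whether the car is on a test stand: wheels turning
        at speed while the steering wheel never moves."""
        return steering_angle_deg == 0.0 and wheel_speed_kmh > 0.0 and elapsed_s < 1200

    def select_emissions_mode(on_test_stand: bool) -> str:
        """Switch between a compliant mode (full exhaust treatment) and
        a 'road' mode that trades NOx control for performance."""
        return "full_nox_treatment" if on_test_stand else "reduced_nox_treatment"

    testing = looks_like_emissions_test(0.0, 50.0, 600.0)
    print(select_emissions_mode(testing))  # -> full_nox_treatment

Note that a conditional like this is trivial to write; what made it a scandal is that the branch existed at all, and that it was deliberately hidden from regulators.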

The problematic aspect of this situation is the deliberate deception of consumers and regulators. The engineers who wrote the code created the mechanism that defeated the emissions tests, but the decision to deploy it for deceptive purposes was made by the company's executives. Responsibility for the unethical behavior therefore falls on both the software engineers and the company's management.

Bias in facial recognition software

Facial recognition software has been shown to have significant biases, particularly against people of color and women. For example, a 2019 study by the National Institute of Standards and Technology (NIST) found that some facial recognition systems falsely matched Black and Asian faces at rates 10 to 100 times higher than white faces. This can have serious consequences, such as misidentifying suspects in criminal investigations or denying people access to services.
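
The sketch below shows, with invented data, how a disparity like the one NIST measured is surfaced in the first place: by computing the false match rate separately for each demographic group instead of reporting one aggregate number. Nothing here reflects any real system, dataset, or group.

    # Hypothetical per-group evaluation; all data is invented.
    from collections import defaultdict

    # Each record: (group, model_said_match, actually_same_person)
    results = [
        ("group_a", True,  False),  # false match
        ("group_a", False, False),
        ("group_a", True,  True),
        ("group_b", True,  False),  # false match
        ("group_b", True,  False),  # false match
        ("group_b", False, False),
    ]

    false_matches = defaultdict(int)
    non_mated_pairs = defaultdict(int)  # pairs that are NOT the same person

    for group, predicted_match, same_person in results:
        if not same_person:
            non_mated_pairs[group] += 1
            if predicted_match:
                false_matches[group] += 1

    for group in sorted(non_mated_pairs):
        fmr = false_matches[group] / non_mated_pairs[group]
        print(f"{group}: false match rate = {fmr:.2f}")
    # -> group_a: 0.50, group_b: 0.67

A single overall accuracy score would hide the gap between the two groups, which is why disaggregated evaluation matters for detecting this kind of bias.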

The problematic aspect of this situation is the perpetuation of bias and discrimination through flawed software. The engineers who developed these systems may not have intended to create bias, but they are responsible for the technical choices that produce it. Here the problem lies in the function the software itself performs: the bias is embedded in the recognition algorithms and, typically, in unrepresentative training data, rather than in any particular misuse of the software.