Ethics 1

Example 1: Injustice in Technological Algorithms

A situation that we consider to be unethical is predictive policing algorithms. These algorithms are trained on biased historical crime-report data, so they direct police officers to areas that contain marginalized communities and that are typically already over-policed. When more officers patrol an area, more incidents are reported there than in areas where officers are not assigned. This creates a feedback loop in which these communities accumulate ever-higher recorded crime rates, which in turn attracts even more patrols. Instead of replicating historical and contemporary patterns of injustice, engineers should supply unbiased, representative data to AI systems to promote a society of fairness. This situation is unethical because crime should be recorded fairly and without bias. Additionally, the problem is systemic in our legal and law-enforcement systems, but it is the engineer's ethical responsibility to raise awareness of the problem rather than blindly implementing the system.
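To make the feedback loop concrete, here is a minimal toy simulation in Python. It is a model of our own construction, not any real predictive-policing product: the two areas "A" and "B", the constants, and the biased historical report counts that seed the patrol allocation are all illustrative assumptions. Both areas have identical true crime; the only difference is the seed data.

```python
import random

random.seed(0)

TRUE_INCIDENTS = 100          # actual incidents per period, identical in both areas
DETECTION_PER_OFFICER = 0.02  # chance an incident gets reported, per officer on patrol
TOTAL_OFFICERS = 40

# Historical (biased) report counts seed the loop: area A starts out over-policed.
reports = {"A": 30, "B": 10}

for period in range(10):
    total_reports = sum(reports.values())
    # Allocate patrols proportionally to past reported crime.
    officers = {area: TOTAL_OFFICERS * reports[area] / total_reports for area in reports}
    # New reports depend on patrol presence, not on any real difference in crime.
    for area in reports:
        detection_prob = min(1.0, DETECTION_PER_OFFICER * officers[area])
        reports[area] += sum(random.random() < detection_prob for _ in range(TRUE_INCIDENTS))
    rounded = {a: round(o, 1) for a, o in officers.items()}
    print(f"period {period}: patrols={rounded}, cumulative reports={reports}")
```

Running this for a few periods shows the initially over-reported area holding on to roughly three times the patrols and three times the recorded crime indefinitely, even though the underlying crime never differed between the two areas.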

Example 2: Racial Discrimination in Face Recognition Technology

Another situation that we consider to be unethical is inequality in facial recognition. One specific case involved Google Pixel cameras not handling darker skin tones properly. This appears to be an error of omission: designing within the bubble of the engineers' socio-cultural circle and not sufficiently evaluating or testing for the whole market. Biometrics, in its current state of implementation, has been shown to exhibit racial bias against Black Americans. Test cases show that this software incorrectly matched people of color to mugshot images disproportionately more often than it did for people who are not people of color. Software engineers should push to train their machine learning algorithms with balanced datasets so that no one race is overrepresented relative to others. Proper training on adequately representative datasets would improve the outcomes of facial recognition.
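Below is a minimal sketch, in Python, of the kind of pre-release check this implies. The group labels, test records, and helper names (false_match_rate_by_group, downsample_to_balance) are our own illustrative assumptions, not any vendor's real benchmark or API. It compares false match rates across demographic groups and shows one simple way to rebalance a training set so that no group is overrepresented.

```python
import random
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: iterable of (group, predicted_match, is_true_match) tuples."""
    impostor_trials = defaultdict(int)  # comparisons that should NOT match
    false_matches = defaultdict(int)    # impostor comparisons the model matched anyway
    for group, predicted_match, is_true_match in records:
        if not is_true_match:
            impostor_trials[group] += 1
            if predicted_match:
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_trials[g] for g in impostor_trials}

def downsample_to_balance(examples, seed=0):
    """Downsample each group to the size of the smallest group (simple rebalancing)."""
    by_group = defaultdict(list)
    for group, example in examples:
        by_group[group].append(example)
    smallest = min(len(items) for items in by_group.values())
    rng = random.Random(seed)
    return {g: rng.sample(items, smallest) for g, items in by_group.items()}

# Illustrative records only (group names and outcomes are made up for this sketch).
test_records = [
    ("group_a", True,  False), ("group_a", False, False), ("group_a", True,  False),
    ("group_b", False, False), ("group_b", False, False), ("group_b", True,  False),
]
print(false_match_rate_by_group(test_records))
# -> roughly {'group_a': 0.67, 'group_b': 0.33}: a disparity the release process should catch
```

If the false match rate differs meaningfully across groups, that should block release rather than ship as-is; balancing the training data is one remedy, and broader test coverage is another.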

Example 3: Non-consensual Deepfake Pornography

The final situation that we consider to be unethical is non-consensual deepfake pornography. Deepfakes are commonly used to digitally manipulate a person's face or body so that it looks like someone else. This technology is unethical because it has spread to pornography and poses a danger to a person's career, reputation, and emotional state. Many would argue that it would have been better not to develop or release this technology in the first place; however, as with much AI research, products are often rushed to release without thinking through the consequences, out of fear of missed opportunities. We consider this technology unethical because it is typically used without a person's consent, and it can be turned against anyone who has ever been photographed. While developing this software, software engineers need to consider all potential consequences, especially because deepfakes can also be used to spread false information and hoaxes.