Ethics 1

https://www.cs.unc.edu/~stotts/COMP523-f23/ethicsAssn.html

Examples:

  1. Bias in face recognition – Karen
  2. Generative AI replacing creative occupations – Isaiah
  3. Predictive Policing Software – Aryonna

1. Bias in Face Recognition

Facial recognition is currently being applied to a multitude of uses, from unlocking your phone to assistive technology to helping U.S. federal agencies identify people. Aside from the ethical concerns that surveillance software presents, facial recognition software has been shown to be biased, identifying people with darker skin tones at a significantly lower accuracy rate. According to an article by Harvard PhD candidate Alex Najibi, across three different facial recognition algorithms, darker-skinned females had a 34% higher error rate than light-skinned males. This has alarming implications, as the software can disproportionately misidentify members of marginalized populations in very serious scenarios, such as law enforcement identification.

When developing this software, engineers did not train their algorithms on sufficiently diverse datasets. The majority of data points were of white males, and the data points for people with darker skin tones were often lower-quality images. Software engineers were not intentional in the way they developed the software and did not consider how these deficiencies would affect specific populations. There should have been more consideration of how the software would perform across all users, ensuring accuracy for anyone who uses it.
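As a rough illustration of what that consideration could look like in practice, the sketch below uses made-up predictions and group labels (not any real benchmark or vendor API) to compute an error rate per demographic group, the kind of audit a team could run before shipping a recognition model.

    from collections import defaultdict

    def per_group_error_rates(predictions, labels, groups):
        """Return the error rate for each demographic group."""
        errors = defaultdict(int)
        totals = defaultdict(int)
        for pred, label, group in zip(predictions, labels, groups):
            totals[group] += 1
            if pred != label:
                errors[group] += 1
        return {g: errors[g] / totals[g] for g in totals}

    # Toy results: (predicted identity, true identity, group label).
    results = [
        ("alice", "alice", "lighter-skinned male"),
        ("bob",   "bob",   "lighter-skinned male"),
        ("dana",  "carol", "darker-skinned female"),  # misidentification
        ("erin",  "erin",  "darker-skinned female"),
    ]
    preds, labels, groups = zip(*results)
    for group, rate in per_group_error_rates(preds, labels, groups).items():
        print(f"{group}: {rate:.0%} error rate")

A large gap between groups in a check like this is exactly the kind of disparity the article describes, and catching it before release is part of developing the software intentionally.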

2. Generative AI Replacing Creative Occupations

There is currently a lot of interest in the capabilities of generative AI for making things like scripts, art, and videos. Two problems arise with this idea: 1. the replacement of jobs in favor of AI automation, and 2. the stealing of other people's work. Both of these problems make this use of generative AI unethical, since disrupting a person's livelihood in favor of (arguably worse) auto-generated scripts or art isn't right. Stealing work isn't right either, yet larger models, specifically ones that generate art, will often use anything posted on the internet as training data, regardless of whether the owners want their work used that way. Two recent examples of generative AI replacing creative jobs happened at Marvel on two Disney+ shows, Secret Invasion and Loki: the intro credits for Secret Invasion were generated using AI, and some marketing posters for Loki were generated using AI. This generated a lot of discussion online, as many people don't like the idea of using generative AI in these shows rather than paying an artist to create work with similar effects.

This situation is a problem of both how the technology is being used and how it was built by engineers. The problem of generative AI replacing people is solely the fault of the greedy people who use it: they don't feel like paying workers and see it as a cheap alternative. The problem of generative AI stealing the work of others is the fault of the people who build these models: they shouldn't pull from anything they can find on the internet, but should instead train only on material that anyone is allowed to use for any purpose, or that the creator has explicitly allowed to be used in training AI models.
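One hedged sketch of what that could look like, assuming each scraped item carries a license field and an opt-in flag (hypothetical field names and license list, not any real pipeline), is a filter that drops everything without explicit permission before training:

    ALLOWED_LICENSES = {"cc0", "cc-by", "public-domain"}  # assumed permissive set

    def filter_training_items(items):
        """Keep only items with an explicit permissive license or an
        explicit opt-in from the creator; drop everything else."""
        kept = []
        for item in items:
            license_tag = (item.get("license") or "").lower()
            opted_in = item.get("creator_opt_in", False)
            if license_tag in ALLOWED_LICENSES or opted_in:
                kept.append(item)
        return kept

    # Toy corpus: only the first and third items should survive the filter.
    corpus = [
        {"url": "https://example.com/a.png", "license": "CC0"},
        {"url": "https://example.com/b.png", "license": "all-rights-reserved"},
        {"url": "https://example.com/c.png", "license": None, "creator_opt_in": True},
    ]
    print([item["url"] for item in filter_training_items(corpus)])

The point of the sketch is that the decision to include someone's work is made explicitly, item by item, rather than defaulting to scraping everything that is publicly visible.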

3. Predictive Policing Software

One clear example of an ethical issue related to race in computer science is the use of predictive policing software. These systems, designed by software engineers, aim to forecast potential crime hotspots based on historical crime data. However, they often perpetuate racial biases that exist within the criminal justice system.

The problem here lies in both the software engineers’ behavior and the functions of the software. Software engineers may unintentionally introduce bias during the development process by using historical data that reflects systemic racial disparities. Furthermore, these systems might not be designed to address these biases, creating a cycle that unfairly targets communities of color.

One specific example that demonstrates the issues with predictive policing software is the use of the PredPol system. PredPol is a software platform designed to forecast where crimes are likely to occur based on historical crime data, such as the location and time of past incidents. The LAPD used PredPol to allocate resources and patrol areas with a higher likelihood of criminal activity. The problem arose when the system relied heavily on historical arrest data, which, due to systemic biases in law enforcement, resulted in an over-policing of predominantly minority neighborhoods. This led to more arrests in these communities, creating a feedback loop of increased data input and subsequent predictive action, disproportionately impacting people of color.
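To make the feedback loop concrete, here is a toy simulation with made-up numbers (not real crime data or PredPol's actual algorithm): two areas have the same underlying crime rate, but the area with more historical arrests gets the patrols, and only patrolled areas generate new records.

    def simulate_hotspot_loop(recorded_arrests, rounds=5, arrests_per_patrolled_area=10):
        recorded = list(recorded_arrests)
        for _ in range(rounds):
            # Patrol the area with the most recorded arrests so far
            # (a crude stand-in for a hotspot prediction).
            hotspot = recorded.index(max(recorded))
            # Both areas have the same underlying crime rate, but only the
            # patrolled area produces new recorded incidents.
            recorded[hotspot] += arrests_per_patrolled_area
        return recorded

    # Area 0 starts with more recorded arrests only because it was
    # historically over-policed.
    print(simulate_hotspot_loop([60, 40]))  # -> [110, 40]

After a few rounds, the recorded gap between the two areas has grown purely because of where the patrols went, which is the self-reinforcing disparity described above.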

The unethical aspect is the perpetuation of systemic racial discrimination and the potential for profiling based on race rather than actual criminal behavior. This highlights a broader issue in computer science where the behavior of developers and the behavior of their software intersect, leading to negative consequences. It's crucial to address these ethical concerns and ensure that software engineers are aware of the implications of their work and actively strive to eliminate bias in their algorithms, especially when dealing with sensitive issues related to race.