Ethics Assignment

  1. Racial discrimination in face recognition technology
  2. Artificial intelligence in the justice system
  3. Audio, video, and image manipulation to create deepfakes using AI

Racial Discrimination in Face Recognition Technology

The use of facial recognition technology is highly controversial. While facial recognition algorithms have long been regarded as highly accurate classifiers, it has become increasingly clear that this accuracy is not shared across all demographics. MIT researcher Joy Buolamwini discovered this while working on a project as a graduate student in the Media Lab. Buolamwini, who is Black, found that a facial-analysis program could not detect her face until she put on a white mask. After further analysis, she discovered that this phenomenon was consistent across many facial-analysis and recognition programs.

The data these algorithms are trained on consists largely of images of white men, and as a result facial recognition often fails to correctly identify people in underrepresented groups. In fact, researchers have found that most facial recognition algorithms falsely identify Black and Asian faces more often than white faces, and Black women are especially vulnerable because these systems also misidentify women more often than men. These failures are particularly concerning given the circumstances in which the technology is deployed. The lack of diversity in the training data is therefore a major concern for the future of facial recognition. Companies and organizations are adopting this technology quickly, so it is crucial that engineers take steps to recognize and remove bias from these systems to increase fairness and inclusion in the digital world, as the sketch below illustrates.
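
The kind of disparity described above is typically quantified by comparing error rates across demographic groups, much as the MIT study did. The following is a minimal, hypothetical audit sketch: the function name, group labels, and trial data are invented for illustration, and a real evaluation would use a large benchmark with many verification trials per group.

    # Hypothetical bias audit: group labels and trial data are invented
    # for illustration, not drawn from any real system or dataset.
    from collections import defaultdict

    def false_match_rate_by_group(results):
        """results: iterable of (group, predicted_match, actual_match)
        tuples from face-verification trials."""
        impostor_trials = defaultdict(int)  # trials where the faces truly differ
        false_matches = defaultdict(int)    # impostor trials the system accepted
        for group, predicted_match, actual_match in results:
            if not actual_match:
                impostor_trials[group] += 1
                if predicted_match:
                    false_matches[group] += 1
        return {g: false_matches[g] / impostor_trials[g]
                for g in impostor_trials if impostor_trials[g] > 0}

    # Toy trials with made-up outcomes to show the shape of the output.
    trials = [
        ("group_a", True, False), ("group_a", False, False),
        ("group_b", False, False), ("group_b", False, False),
    ]
    print(false_match_rate_by_group(trials))
    # e.g. {'group_a': 0.5, 'group_b': 0.0} -- unequal false match rates
    # across groups are exactly the signal of demographic bias.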

Joy Buolamwini: How I accidentally became a fierce critic of AI (bostonglobe.com)

Study finds gender and skin-type bias in commercial artificial-intelligence systems | MIT News | Massachusetts Institute of Technology

Artificial Intelligence in the Justice System

The use of artificial intelligence in the justice system raises ethical concerns about bias. As discussed above, facial recognition has a tendency to misidentify people of color. This has contributed to several false arrests in recent years, in which dark-skinned people were wrongly accused of committing a crime after police used facial recognition technology to match an unknown suspect’s face to photos in a database. Arrest by law enforcement is not the only place in the criminal justice system where artificial intelligence is being used, however. In some courtrooms, AI algorithms are being used for sentencing and risk assessment. While this technology can help judges determine appropriate sentences and evaluate the likelihood that someone will engage in criminal behavior in the future, it relies solely on historical data. This becomes a problem when we recognize that low-income and minority communities have been disproportionately targeted by law enforcement. By picking out the patterns in this historical data, these algorithms can perpetuate the bias that runs throughout our history. Because of these concerns, it is crucial that artificial intelligence training data be diverse and that engineers continuously strive for fairness when designing these algorithms. An example of wrongful detainment resulting from AI is described in the article below.

Eight Months Pregnant and Arrested After False Facial Recognition Match – The New York Times (nytimes.com)
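
The feedback loop described above can be shown with a toy simulation. This sketch assumes two areas with identical underlying offense rates but unequal policing; every number is invented for illustration, and the naive "risk score" simply learns the historical arrest frequency, so it reflects enforcement patterns rather than behavior.

    # Hypothetical simulation: all rates are invented to show the mechanism,
    # not to describe any real jurisdiction or deployed risk tool.
    import random

    random.seed(0)
    TRUE_OFFENSE_RATE = 0.10                       # identical in both areas
    PATROL_RATE = {"area_a": 0.9, "area_b": 0.3}   # unequal enforcement

    # "Historical" records: an offense only enters the data if it was policed.
    records = []
    for area, patrol in PATROL_RATE.items():
        for _ in range(10_000):
            offended = random.random() < TRUE_OFFENSE_RATE
            recorded = offended and random.random() < patrol
            records.append((area, recorded))

    def risk_score(records, area):
        """A naive model: predicted risk is the historical arrest frequency."""
        rows = [rec for a, rec in records if a == area]
        return sum(rows) / len(rows)

    for area in PATROL_RATE:
        print(area, round(risk_score(records, area), 3))
    # area_a scores roughly 0.09 versus roughly 0.03 for area_b, despite
    # identical true offense rates: the model learned policing, not behavior.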

Audio, Video, and Image Manipulation with Artificial Intelligence

New advancements in audio, video, and image manipulation with the help of artificial intelligence have made it increasingly simple for any user to create and distribute deepfakes quickly and at large scale. While deepfakes, a type of synthetic media created with artificial intelligence and machine learning, can be used in positive ways, the main ethical concern is their malicious use. This media can be used to blackmail, intimidate, or humiliate someone. It can create false narratives and further divide opposing parties at any scale, with consequences that could be disastrous for individuals as well as large organizations and institutions.

For example, deepfakes in the political arena can have a substantial impact on the outcome of an election. Members of opposing parties can create deepfake audio or video of other candidates to harm those candidates’ reputations and influence voters in a certain direction, or to spread misinformation about pressing political matters, opposing candidates, and opposing campaigns. One example is a deepfake video of Volodymyr Zelensky declaring that the Russia-Ukraine war was over, a claim that many viewers knew to be false. Individuals’ lives have been upended after they discovered deepfakes of themselves on adult sites. The false information that synthetic media can spread is an issue that creators and media platforms must work to combat. Engineers building these deep generative models may not be able to prevent their tools from being used maliciously, but platforms that distribute media can protect users from misinformation, humiliation, blackmail, and intimidation by putting in place policies and controls that limit the sharing of this content.
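
One possible platform control of the kind mentioned above is refusing uploads that match a blocklist of previously flagged media. The sketch below is hypothetical: the blocklist contents and helper names are invented, and production systems typically rely on perceptual hashes that survive re-encoding rather than the exact byte hash used here for simplicity.

    # Hypothetical platform control: block uploads whose hash matches media
    # already flagged as malicious synthetic content. Real systems generally
    # use perceptual hashing, not exact byte hashing, to catch re-encodes.
    import hashlib

    KNOWN_DEEPFAKE_HASHES = {
        "3f79bb7b435b05321651daefd374cd21b3a2d1b4b8f3a9d2c7b0f1e2a3c4d5e6",  # placeholder
    }

    def sha256_of(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def allow_upload(data: bytes) -> bool:
        """Return False if the file matches previously flagged synthetic media."""
        return sha256_of(data) not in KNOWN_DEEPFAKE_HASHES

    print(allow_upload(b"example video bytes"))  # True: not on the blocklist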

Deepfakes – The Danger Of Artificial Intelligence That We Will Learn To Manage Better (forbes.com)

Debating the ethics of deepfakes | ORF (orfonline.org)