Detecting Sexual Orientation Through Facial Recognition
The paper in question used deep neural networks to classify people's sexual orientation from facial images. The dataset was collected from public profiles on a US dating website, and the authors claim the model achieved higher accuracy than human judges at this task.
Why does it seem unethical?
Firstly, it is an invasion of privacy. Using facial recognition to infer someone's sexual orientation without their consent violates their privacy: people should have the right to keep their sexual orientation private, not to have it analyzed or guessed from their appearance.
Secondly, the model is inaccurate and biased. The image below, reproduced from the paper, shows composite faces and average facial landmarks obtained by averaging the faces in each class.
We can immediately see that some of the differences between the heterosexual and gay composites are superficial. For example, glasses are clearly visible on the gay male composite, and eyeshadow is visible on the heterosexual female composite. This suggests that the machine learning algorithm did not learn facial structure but rather superficial grooming and presentation patterns specific to the narrow domain of profile photos on a dating website.
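One way to check for this kind of reliance on superficial cues is an occlusion test: mask out the regions that carry grooming cues (glasses, eyeshadow) and see how much the classifier's accuracy drops. The sketch below is a minimal, hypothetical illustration of that idea; the classifier, images, and region coordinates are placeholder stand-ins, not the model or data from the paper.

```python
import numpy as np

def occlude_eye_region(images: np.ndarray, top: int = 60, bottom: int = 100) -> np.ndarray:
    """Black out a horizontal band covering the glasses/eyeshadow region (NHWC images)."""
    occluded = images.copy()
    occluded[:, top:bottom, :, :] = 0.0
    return occluded

def accuracy(predict_fn, images: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of images the classifier labels correctly."""
    return float((predict_fn(images) == labels).mean())

def occlusion_sensitivity(predict_fn, images, labels):
    """Compare accuracy on original vs. eye-occluded images."""
    base = accuracy(predict_fn, images, labels)
    masked = accuracy(predict_fn, occlude_eye_region(images), labels)
    return base, masked, base - masked

if __name__ == "__main__":
    # Toy stand-ins: random images, random labels, and a dummy classifier,
    # used only so the sketch runs end to end.
    rng = np.random.default_rng(0)
    images = rng.random((32, 224, 224, 3), dtype=np.float32)
    labels = rng.integers(0, 2, size=32)
    dummy_classifier = lambda x: rng.integers(0, 2, size=len(x))
    base, masked, drop = occlusion_sensitivity(dummy_classifier, images, labels)
    print(f"accuracy {base:.2f} -> {masked:.2f} (drop {drop:.2f})")
```

A large accuracy drop when only the eye region is masked would suggest the model is keying on accessories and makeup rather than on facial structure.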
Engineer vs. Product Behavior
Such work raises serious ethical concerns and reinforces potentially harmful stereotypes. Here, the ethical problem stems primarily from the engineers' behavior: the researchers' lack of privacy awareness and their biased framing led to the production of such a flawed model.
Cambridge Analytica and Facebook Data Scandal
Cambridge Analytica, a British political consulting and data analytics firm, acquired personal data from nearly 87 million Facebook users without their consent. This data was used primarily to target voters with political advertisements during major events such as the 2016 U.S. presidential election and the Brexit referendum. The firm harvested the data through a quiz app called “This Is Your Digital Life,” which collected data not only from the app’s users but also from their Facebook friends via Facebook’s Open Graph platform, without explicit consent.
Why does it seem unethical?
This situation was highly unethical because it involved collecting and manipulating personal data without user consent, and then using that data to influence political outcomes. Not only was the privacy of millions of users compromised, but their data was also used in ways they never agreed to. The ethical principles of autonomy, consent, and privacy were all violated, showing a significant disregard for individual rights and the integrity of democratic processes.
Engineer vs. Product Behavior
The problem in this incident lies in both the behavior of the engineers and the design of the software systems. Engineers and data scientists at Cambridge Analytica purposely designed and implemented methods to exploit Facebook’s data in ethically questionable ways. Meanwhile, Facebook’s platform permitted extensive data access to third-party applications without ensuring clear, informed consent from users. This design flaw, coupled with Cambridge Analytica’s conscious efforts, facilitated the unethical transfer and use of data. Despite later efforts to address these issues, the incident revealed profound ethical lapses in the tech industry’s approach to data privacy and user consent, leading to widespread criticism and calls for stricter regulation.
Clearview AI
Clearview AI is a facial recognition technology company that has compiled a massive database of over 20 billion images scraped from public web sources like social media sites. Their software allows users, primarily law enforcement agencies, to upload an image of a person and match it against this database to potentially identify them.
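To make the privacy stakes concrete, the sketch below illustrates how an embedding-based face search of this general kind works: every scraped photo is mapped to a vector, and a probe photo is matched against its nearest neighbors. This is not Clearview's actual code; the embedding function is a random stand-in, assumed purely to make the pipeline runnable.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a face-embedding model (e.g., a CNN); returns a unit vector."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def build_index(gallery_images) -> np.ndarray:
    """Embed every scraped image once and stack the vectors into a matrix."""
    return np.stack([embed(img) for img in gallery_images])

def search(index: np.ndarray, probe_image: np.ndarray, k: int = 5):
    """Return the k most similar gallery faces by cosine similarity."""
    scores = index @ embed(probe_image)
    top = np.argsort(scores)[::-1][:k]
    return list(zip(top.tolist(), scores[top].tolist()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = [rng.random((112, 112, 3)) for _ in range(1000)]  # stand-in for scraped photos
    probe = rng.random((112, 112, 3))                           # stand-in for an uploaded photo
    for idx, score in search(build_index(gallery), probe):
        print(f"gallery image {idx}: similarity {score:.3f}")
```

The point is that once a gallery of scraped photos has been embedded, identifying any new face reduces to a single nearest-neighbor lookup, which is what makes a database of this scale so consequential.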
Why does it seem unethical?
The lack of oversight and transparency around how this powerful surveillance tool is being used raises serious concerns about the potential for abuse, false identifications, and the chilling of free speech. More specifically, given that facial recognition technology is biased and susceptible to errors, we should ban its use in high-stakes situations. It should be noted that Clearview’s top client is the US government, which uses the software to identify criminal suspects, and that many facial recognition systems have substantially higher error rates for people of color. For instance, researchers and civil rights advocates have demonstrated that such technologies have mistaken dark-skinned members of Congress for convicted criminals. This use case, however, is not an outlier in how the product could harm society, as the technology could also be abused in the wrong hands: stalkers could identify victims of revenge porn, or governments could use it to uncover the identities of protestors.
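The disparities mentioned above are typically exposed by disaggregated evaluation: measuring false match rates separately for each demographic group. The sketch below illustrates that kind of audit on synthetic data; the group labels, error rates, and predictions are all hypothetical (real audits, such as NIST's face recognition evaluations, use curated benchmark datasets).

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of true non-matches that the system incorrectly flags as matches."""
    negatives = y_true == 0
    if negatives.sum() == 0:
        return float("nan")
    return float((y_pred[negatives] == 1).mean())

def fpr_by_group(y_true, y_pred, groups):
    """Disaggregate the false match rate across demographic groups."""
    return {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10_000
    groups = rng.choice(["group_a", "group_b"], size=n)  # hypothetical demographic labels
    y_true = rng.integers(0, 2, size=n)                  # 1 = genuine match, 0 = non-match
    # Synthetic predictions with a deliberately higher error rate for group_b,
    # to show what a disparity looks like in this kind of audit.
    noise = np.where(groups == "group_b", 0.15, 0.03)
    flip = rng.random(n) < noise
    y_pred = np.where(flip, 1 - y_true, y_true)
    for group, fpr in fpr_by_group(y_true, y_pred, groups).items():
        print(f"{group}: false match rate = {fpr:.3f}")
```

If the false match rate for one group is several times higher than for another, people in that group bear a disproportionate risk of being wrongly identified as suspects.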
Engineer vs. Product Behavior
Clearly the blame lies with both the engineers and the product. Specifically, the engineers continued to develop the product in spite of the deleterious use cases that already existed, and as a result, the product is now harming individuals, with people of color being affected the most. Moreover, the biased training data behind the facial recognition model indicates that the product itself is also at fault. It should be noted, however, that the bulk of the responsibility rests with the engineers and founders of the company, as they made the choices about how to build the system and whom to sell it to.