
Ethics 1

Biased Hiring Practices

As AI rapidly expands, it is being incorporated into numerous practices, one of which is the hiring process. There is no doubt that automation and AI can help optimize initial screening and speed up the entire process. One of the main ways AI is used in hiring is to scan and analyze resumes, cover letters, and other materials for keywords and phrases, and then screen candidates based on the results. This practice can unintentionally filter out candidates who don't fit a cookie-cutter image, disadvantaging candidates with experience gaps or discriminating on the basis of gender, race, socioeconomic status, and age. No system is perfect, and AI models are infamously imperfect algorithms trained on biased datasets. Further, while many may argue the opposite, the biases held by machine learning engineers are inevitably reflected in the systems they build; it is a product of human nature. Because of this bias, using these systems in hiring is unethical: intentionally or not, they can deny people fair consideration for a role and inhibit them from pursuing certain career paths. One specific example of this biased hiring practice was unveiled in a recent Bloomberg study of OpenAI's GPT-3.5, the model behind ChatGPT. Although ChatGPT may not be the specific AI used during hiring, it serves as a good representative of existing AI models. The study found that the model held racial and gender biases against candidates, identifying an adversely impacted group for every job listing except one. For example, it picked white women for a software engineer role more often, and black women significantly less often.
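
To make the screening mechanism described above concrete, here is a minimal sketch of keyword-based resume filtering. The keyword list, threshold, and sample resume text are hypothetical assumptions for illustration, not any vendor's actual screening logic.

```python
# Hypothetical sketch of keyword-based resume screening, as described above.
# The keyword set, threshold, and sample resumes are illustrative assumptions.

REQUIRED_KEYWORDS = {"python", "kubernetes", "agile", "bachelor's degree"}
SCORE_THRESHOLD = 3  # arbitrary cutoff for advancing to a human reviewer

def screen_resume(resume_text: str) -> bool:
    """Advance a candidate only if enough keywords appear verbatim."""
    text = resume_text.lower()
    score = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return score >= SCORE_THRESHOLD

# A candidate who writes "led container orchestration for a scrum team"
# scores 0 here even though the experience is comparable -- exactly the
# cookie-cutter filtering problem described in the paragraph above.
print(screen_resume("Python developer, Kubernetes, agile, bachelor's degree"))  # True
print(screen_resume("Led container orchestration for a scrum team"))            # False
```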

The problem here lies with both the product and the engineer. The engineer's role was already touched on: their biases can become reflected in the system. The data the system is trained on can also be biased, with more representation of white men than of other groups, skewing the system toward majority groups. Beyond that, AI screening strips out personality, work ethic, and numerous other factors that automated systems struggle to identify. It has created a culture of "beating the bot" on job applications, with candidates optimizing their resumes so that keywords get picked up rather than presenting a true reflection of their working ability. The choice of companies to use AI makes sense; it's cheap and effective. However, where is the cutoff point? As AI continues to develop, it will be interesting to see how heavily companies come to rely on it in their hiring practices.
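
As a rough illustration of how a skewed training set carries its bias forward, the sketch below uses invented per-group hiring counts; a model trained simply to reproduce past decisions ends up predicting the same disparity it was shown. The groups, counts, and rates are all hypothetical.

```python
# Hypothetical illustration of training-data bias. The counts below are
# invented for demonstration only and do not come from any real dataset.

historical_hires = {
    # group: (applicants in training data, hired in training data)
    "group_a": (800, 400),   # heavily represented, 50% historical hire rate
    "group_b": (200, 60),    # underrepresented, 30% historical hire rate
}

# A model trained to reproduce past decisions effectively learns these
# per-group rates, so the historical disparity becomes the predicted one.
learned_rates = {g: hired / total for g, (total, hired) in historical_hires.items()}

for group, rate in learned_rates.items():
    print(f"{group}: predicted selection rate {rate:.0%}")
# group_a: predicted selection rate 50%
# group_b: predicted selection rate 30%
```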

A few articles touching on this topic:

https://communicationmgmt.usc.edu/blog/ai-in-hr-how-artificial-intelligence-is-changing-hiring

https://www.forbes.com/sites/jackkelly/2023/11/04/how-companies-are-hiring-and-firing-with-ai/?sh=460a0903593b

https://www.bloomberg.com/graphics/2024-openai-gpt-hiring-racial-discrimination/?embedded-checkout=true

Deepfake technology for nonconsensual pornography

The pervasive use of deepfake technology for nonconsensual pornography underscores a deeply concerning trend in online exploitation. Research suggests that the vast majority of deepfakes, about 90-95% since 2018, are deployed in nonconsensual pornographic contexts. These deepfakes use images of women who have not consented and are often completely unaware that they are being portrayed in such a manner. This is a severe violation of their privacy and autonomy, and it is deeply unethical. Further, the misuse extends beyond adults, as deepfaked pornographic material may also portray children in compromising situations. Even in cases where the depicted individuals are fictitious, the portrayal of child-like images exacerbates the reprehensible nature of these acts.

Additionally, a manipulated video can easily be distributed widely across the internet, creating a false narrative of the victim's involvement in inappropriate or illegal activities. The false perception created by deepfakes, which may be entirely fabricated, can inflict profound harm on individuals, damaging their mental health, reputation, relationships, and professional lives, with consequences that may be irreversible.

While AI deepfake technology can have many positive applications, such as letting people see and hear fragments of a deceased loved one, this misuse raises valid ethical concerns as to whether this kind of technology needs to exist at all and what sort of restrictions may be warranted. This is especially true considering that the tools are easily accessible to anyone with an internet connection.

This situation is largely a result of how the technology is being used. The engineers behind these tools likely did not intend for them to be used in harmful ways. However, restrictions should be put in place to prevent further unethical use of deepfake technology. There are currently moves toward such restrictions, with a recent instance of faked explicit images of Taylor Swift reinvigorating calls within Congress for action.

https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf

https://oversight.house.gov/release/mace-deepfakes-pose-real-dangers/#:~:text=It%20can%20be%20used%20to,to%20create%20national%20security%20threats.

https://www.bbc.com/news/technology-68110476

Data Misuse and Privacy Breaches in Apps

In an age of constant technology consumption, applications, most notably social media apps, collect user data for a variety of reasons, such as personalizing a user's experience or analyzing audience demographics. With this comes an important discussion about how to ensure that the data collection is ethical. We must consider transparency (the user's knowledge of what information is being collected), security, access rights, and so on. In recent years especially, we have seen many situations, and even legal cases, that revolve around the misuse of data or data collection without the user's consent. One example is Uber's "God View" tool, reported on in 2016. With this tool, there was no restriction on which employees could see when and where customers were traveling. And despite Uber's assurance in 2014 that such access would be discontinued, employees could still view all of this information. Uber eventually renamed the tool "Heaven View," but access remained largely unrestricted and employees were caught tracking personal information. Notably, Uber publicly denied that access was unrestricted while its own employees contradicted those denials. Two main issues arise in this case: a lack of transparency and a failure to keep personal information private.

The problem in this case is due more to the software engineers' behavior, since engineers are the ones intentionally building features that violate users' privacy. Further, the lack of transparency could easily be addressed if companies chose to inform users about the features they were adding. This lack of transparency is perceived as secrecy and leaves users uncertain about what information is being collected and how their data is being used without their knowledge.
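
As a minimal sketch of the kind of restriction that was reportedly missing, the snippet below gates location lookups behind a small set of authorized roles and records every access attempt for audit. The role names, function names, and data are hypothetical and are not Uber's actual systems.

```python
# Hypothetical sketch of role-based access control plus audit logging for
# rider location data. Roles, names, and data are illustrative assumptions.

from datetime import datetime, timezone

AUTHORIZED_ROLES = {"fraud_investigation", "law_enforcement_response"}
access_log = []  # in practice this would be an append-only audit store

def lookup_location(trip_id: str) -> str:
    # Stand-in for the real data store.
    return f"location for {trip_id}"

def get_trip_location(employee_role: str, employee_id: str, trip_id: str) -> str:
    """Return trip location only for authorized roles, and record every access."""
    access_log.append({
        "employee": employee_id,
        "trip": trip_id,
        "time": datetime.now(timezone.utc).isoformat(),
        "granted": employee_role in AUTHORIZED_ROLES,
    })
    if employee_role not in AUTHORIZED_ROLES:
        raise PermissionError("Role not authorized to view rider location data")
    return lookup_location(trip_id)

# An ops employee browsing a rider's trips is denied, and the attempt is logged.
try:
    get_trip_location("operations", "emp_42", "trip_1001")
except PermissionError as err:
    print(err)
print(access_log[-1]["granted"])  # False
```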

Article:

https://www.cosmopolitan.com/lifestyle/a8495499/uber-using-god-view-tool-to-spy-on-celebs/