Ethics Assignment 1

  1. Social Media Privacy:
    • Situation: When users create social media accounts, they leave a digital trail of personal information, interests, and locations, collected primarily through tracking cookies, geofencing, and cross-site tracking. By agreeing to the terms and conditions at sign-up, users grant platforms permission to amass this data, which then serves market analysis, targeted advertising, service customization, and content recommendations; even a user’s likes and dislikes help shape their online profile. Companies leverage this data to understand their customers’ interests and tailor ads to those preferences. Social media surveys capture further interests, and this information can be purchased by related companies, letting them engage users with posts relevant to their tastes. Companies also pay social media platforms to promote their brands, producing sponsored content shown to users who match the product’s target demographic. The data behind this targeting comes from tracking cookies, shared profile information, and, if provided, users’ email addresses or phone numbers, enabling companies to deliver tailored information about their products and services.
    • Perhaps the best-known example of a social media company breaching user privacy is Facebook’s 2018 scandal, in which the data of millions of users was improperly harvested by the third-party firm Cambridge Analytica. This article does a great job explaining the ins and outs of the situation.
    • In this situation, the software engineer’s behavior involves unauthorized access to users’ private messages and personal data, a clear violation of privacy. This breaches users’ trust and can lead to consequences such as identity theft, harassment, or blackmail. The problem is two-fold: the engineer’s conduct is unethical, an abuse of power and trust that violates company policy and professional standards; and the software itself enables that behavior by lacking adequate security measures to prevent such breaches. The company should have better safeguards in place to protect user data.
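    • As a rough sketch of the cross-site tracking mechanism described above, the following simulation (all names and details invented for illustration) shows how a single third-party cookie ID lets one ad server join a browser’s visits across unrelated sites into one profile:

```python
import uuid

# Hypothetical third-party tracker. Real trackers use HTTP Set-Cookie
# headers rather than function arguments, but the linking logic is the same.
class AdTracker:
    def __init__(self):
        self.profiles = {}  # cookie_id -> list of (site, page) visits

    def serve_ad(self, cookie_id, site, page):
        """Called whenever a page embeds this tracker's ad.
        Returns the cookie ID the browser should store
        (first-time visitors get a fresh one)."""
        if cookie_id is None:
            cookie_id = str(uuid.uuid4())
        self.profiles.setdefault(cookie_id, []).append((site, page))
        return cookie_id

tracker = AdTracker()

# The same browser visits two unrelated sites that both embed the tracker;
# the cookie it sends back lets the tracker join the visits into one profile.
cid = tracker.serve_ad(None, "news.example", "/politics")
cid = tracker.serve_ad(cid, "shop.example", "/sneakers")

assert len(tracker.profiles[cid]) == 2  # one profile now spans both sites
```

This is why the same browser sees ads for products it browsed on an entirely different site: the two visits share one tracker-issued identifier.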
  2. AI Bias:
    • Situation: AI bias is a pressing issue with far-reaching consequences, and abundant evidence documents both its existence and its harm. Numerous instances illustrate how AI models can inadvertently perpetuate human and societal biases at massive scale. For instance, COMPAS, a tool used to predict recidivism in Broward County, Florida, disproportionately labeled African-American defendants as “high-risk” compared to their white counterparts. A technology company halted development of a hiring algorithm trained on its previous decisions because it systematically disadvantaged applicants from women’s colleges. Research by Joy Buolamwini and Timnit Gebru further shows that facial analysis technologies exhibit different error rates by race and gender. And in a simple image search for “CEO,” a mere 11 percent of the top results depicted women, even though women constituted 27 percent of U.S. CEOs at the time. These examples underscore that AI bias is not only prevalent but profoundly detrimental, perpetuating inequality and prejudice in many aspects of our lives.
    • This article in particular is super eye-opening about generative-AI bias.
    • The primary source of AI bias typically stems from underlying data rather than the algorithms themselves. Models are often trained on data reflecting human decisions or second-order effects of societal inequities. For instance, word embeddings trained on news articles can mirror gender stereotypes present in society. Bias can also infiltrate the data during collection or selection processes, as seen in criminal justice models that overrepresent certain neighborhoods due to over-policing, resulting in more recorded crimes and heightened policing. Searches for African-American-identifying names tended to yield more ads featuring the word “arrest” than searches for white-identifying names, potentially due to user interaction patterns influencing algorithmic display. While the software engineer may not have intentionally programmed bias into the system, their failure to identify and mitigate these biases during development is problematic. The problem is primarily rooted in the product itself. The hiring software, due to biased data or algorithms, is inherently unfair. Software engineers and data scientists working on the project should have conducted thorough audits and testing to identify and correct bias in the system. The lack of such due diligence in designing the software makes the product unethical.
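    • One concrete form the thorough audits mentioned above can take is a simple fairness metric. The sketch below, with invented numbers, computes the demographic parity difference and the “four-fifths rule” ratio for a hypothetical hiring model’s recommendations; real audits would use many metrics and real outcome data:

```python
# Minimal bias-audit sketch on hypothetical hiring-model outputs.
def selection_rate(decisions):
    """Fraction of candidates the model recommends hiring."""
    return sum(decisions) / len(decisions)

# 1 = model recommends hiring, 0 = rejects (invented data)
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

# Demographic parity difference: gap between the groups' selection rates.
gap = selection_rate(group_a) - selection_rate(group_b)
print(f"demographic parity difference: {gap:.2f}")  # prints 0.50

# The "four-fifths rule" of U.S. employment guidance flags disparate
# impact when one group's rate is under 80% of another's.
ratio = selection_rate(group_b) / selection_rate(group_a)
assert ratio < 0.8  # this hypothetical model would be flagged
```

Running such a check during development is exactly the kind of due diligence whose absence makes a biased hiring product unethical.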
  3. Deepfakes:
    • Situation: Deepfakes are a troubling form of media that frequently involve the digital alteration of individuals’ voices, faces, or bodies, making them appear to say or do something they never did. These manipulative creations are often harnessed to deliberately disseminate false information or carry out malicious intentions, capable of causing harm by harassing, intimidating, demeaning, or undermining individuals. In addition to their deceptive nature, deepfakes contribute to the propagation of misinformation, generating confusion around critical topics. Moreover, the technology behind deepfakes can serve as a catalyst for unethical actions, exemplified by the creation of pornographic content, an issue that disproportionately impacts women and raises significant concerns regarding privacy and consent.
    • In fact, deepfakes are considered dangerous enough that the Pentagon is actively working to combat them. CNN covered this story in a recent article.
    • This scenario involves the development and deployment of software that creates deepfake content without individuals’ consent, often for malicious purposes such as revenge or harassment. The unethical aspect lies in the software’s primary function, which facilitates harm and breaches privacy and consent; the problem is therefore rooted in the software’s design itself. Deepfake generators of this kind are inherently unethical due to their potential for serious harm and abuse, and the software engineers and developers who build them directly enable non-consensual, harmful content generation. Their actions are inexcusable from an ethical standpoint.