Ethics

Amazon's Gender-Discriminating Resume Algorithm (here)

Amazon developed a hiring algorithm that was intended to decide which applicants should be hired based on their resumes. The idea was to use this system to automate the hiring process; however, the system was biased in favor of men because of the dataset it was trained on.

Why does it seem unethical?

This seems unethical because a system intended to be part of the hiring process was actively discriminating against women. The algorithm was meant to replace a large portion of the manual resume review, yet the changes made to reduce its bias could not guarantee that the problem was completely fixed. This means that if the system were implemented, there would be no way to fully prevent discrimination.

The problem (engineer vs. product)

The problem in this case was that the algorithm was trained on historical data that lacked equal representation: the resumes of people Amazon had hired in the past. Because of this imbalance, the biases present in the dataset were adopted by the algorithm. Efforts to remove these biases were acknowledged to not be foolproof in preventing discrimination, and so the idea was scrapped.
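
To make that mechanism concrete, here is a toy, purely illustrative sketch (not Amazon's actual system or data) of how a classifier trained on skewed historical hiring decisions can attach weight to gendered wording rather than to qualifications:

```python
# Toy illustration of bias adoption: the resumes, labels, and model below are
# invented for this sketch and are not Amazon's data or method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical "hired" decisions that happen to skew along gendered wording.
resumes = [
    "captain of the men's chess club, software intern",      # hired
    "software intern, hackathon winner",                      # hired
    "software intern, women's coding society president",      # rejected
    "women's robotics team lead, software intern",            # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Shared qualification terms ("software", "intern") get near-zero weight,
# while gendered terms absorb the signal from the skewed labels.
for word, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{word:>12s}: {weight:+.2f}")
```

On data like this, the word "women" ends up with a negative weight even though it says nothing about ability, which mirrors the kind of behavior reported for Amazon's system.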

Uber God View (here)

Uber’s God View, a tool that allowed Uber employees to track the real-time location of Uber drivers and customers, is a notable example of a software system that raised ethical concerns. This tool was intended for purposes like customer support, but it was misused on several occasions, demonstrating behavior that many consider unethical.

Why does it seem unethical?

Uber’s God View is unethical for two primary reasons: individual privacy violations and a blatant abuse of power on Uber’s part. The former is quite straightforward: users were not informed that Uber was tracking their location at all times; in fact, they were told the opposite. Users have a reasonable expectation that their location data will be used responsibly and only for legitimate purposes, so unauthorized tracking of users without their knowledge or consent is a clear breach of privacy. Uber also abused its power as the owner of the app; while there are debates about what the creator of an app can or cannot do, this case clearly overstepped the rights that employees should have. While the tool was designed with good intentions, the unethical behavior stemmed from individual employees misusing their access privileges.

The problem (engineer vs. product)

In this case, both the engineers and the product are at fault: the engineers clearly neglected ethical concerns and abused the platform, but the boundaries of the product were also not defined well enough to limit that power. Engineers play a crucial role in designing systems and setting access controls; the misuse of God View was facilitated by lax access controls and inadequate training about the ethical use of such tools. Thus, there is a responsibility on the part of the engineers and the company’s training programs to prevent such behavior. However, the design and functionality of the God View tool also played a role; even though the tool had legitimate use cases in customer support, its design lacked proper safeguards to prevent unauthorized access.
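
As a rough illustration of the missing safeguards, the sketch below shows the kind of access check and audit trail that could constrain a tool like God View; the names, roles, and functions are hypothetical and are not Uber's actual code.

```python
# Hypothetical safeguard sketch: role-based access plus an audit log for every
# location lookup, so that misuse is both harder and traceable.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("location_access_audit")

ALLOWED_ROLES = {"support_agent"}  # invented role name for illustration


@dataclass
class Employee:
    employee_id: str
    role: str


def lookup_location(rider_id: str) -> tuple[float, float]:
    # Stub so the sketch runs; a real system would query a location service.
    return (37.7749, -122.4194)


def get_rider_location(employee: Employee, rider_id: str, ticket_id: str | None):
    """Return a rider's location only for authorized, ticket-backed requests."""
    authorized = employee.role in ALLOWED_ROLES and ticket_id is not None
    # Every attempt is recorded, successful or not, so misuse leaves a trail.
    audit_log.info(
        "employee=%s role=%s rider=%s ticket=%s authorized=%s",
        employee.employee_id, employee.role, rider_id, ticket_id, authorized,
    )
    if not authorized:
        raise PermissionError("Location access requires an active support ticket.")
    return lookup_location(rider_id)


agent = Employee("e-123", "support_agent")
print(get_rider_location(agent, rider_id="r-456", ticket_id="t-789"))
```

Tying every lookup to an active support ticket and logging each attempt would not stop a determined insider, but it narrows legitimate access and leaves evidence of misuse.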

OpenAI DALL-E (here) (here)

DALL-E is a generative AI tool that creates unique images resembling professional artwork or realistic photographs. These images are generated by a neural network, a deep learning model, from natural-language text prompts supplied by the user. The text-to-image model has been trained on millions of human-made artworks and photographs, as well as licensed stock images from websites like Shutterstock. DALL-E 3 recently became available for free on Bing in early October 2023 and is set to become available to paying users of ChatGPT Plus and Enterprise later this month.
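
For context on how such a tool is consumed programmatically, the openai Python SDK exposes an image-generation endpoint along these lines at the time of writing; the prompt and settings here are arbitrary examples, and a configured API key is assumed.

```python
# Minimal example of requesting a DALL-E 3 image through the openai Python SDK.
# Assumes the OPENAI_API_KEY environment variable is set; the prompt is arbitrary.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="an impressionist painting of a lighthouse at dusk",
    size="1024x1024",
    n=1,
)

# The API returns a URL pointing to the generated image.
print(response.data[0].url)
```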

Why does it seem unethical? 

The developers of DALL-E knew that the tool would likely be used by bad actors; one potentially problematic use case they cited was the creation of images that spread disinformation. Therefore, they initially restricted public access and only allowed researchers, academics, journalists, and artists to use the tool while they refined their content rules. Now, guardrails are in place to prevent the tool from producing anything when given a prompt that contains violent, sexual, or hateful content. The tool will also deny requests to generate images of public figures or to produce artwork in the style of living artists. The tool has received backlash from artists who fear they will be replaced or that their style will be replicated unethically. Furthermore, the images used to train the model pose an ethical concern as well. Artists and photographers can choose to opt out of having their work used to train future models; however, many feel they should have to opt in instead. Copyrighted works have also been used as part of the training data, which has sparked copyright-infringement lawsuits against DALL-E and similar tools.
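
A drastically simplified sketch of this kind of guardrail appears below; the categories, keyword patterns, and function names are invented for illustration and bear no resemblance to OpenAI's actual moderation pipeline.

```python
# Hypothetical prompt guardrail: refuse disallowed requests before any image
# is generated. Categories and patterns are invented for this sketch.
import re

BLOCKED_PATTERNS = {
    "violent content": re.compile(r"\b(gore|massacre|behead)\w*\b", re.IGNORECASE),
    "public figures": re.compile(r"\b(president|senator|celebrity)\b", re.IGNORECASE),
    "living artists": re.compile(r"\bin the style of [A-Z][a-z]+ [A-Z][a-z]+"),
}


def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason); generation proceeds only if allowed is True."""
    for category, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            return False, f"Prompt rejected: appears to request {category}."
    return True, "Prompt accepted."


for prompt in ["a watercolor of a quiet harbor at dawn",
               "a portrait of the president signing a law"]:
    print(prompt, "->", check_prompt(prompt))
```

The point is only that refusal happens before generation: a prompt matching a disallowed category never reaches the model.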

The problem (engineer vs. product)

In this case, the problem largely lies with the product itself. Text-to-image models require data to train on, and the selection and use of that data raises ethical concerns related to copyright infringement, bias, and the replication of human-created works. Text-to-image generators will likely always spark backlash among creators. However, the engineers exacerbated this problem by not consulting creators before using their work as training data. Another problem inherent to the product is how it will be used: a tool like this has the potential to cause harm if it is used inappropriately or by someone with bad intentions. The engineers have tried to mitigate this by limiting the prompts that can be used and by taking the time to research and anticipate potential misuses before releasing the tool to the public. As the product becomes publicly available, the engineers will have to do their due diligence to monitor and respond to problems that arise as use becomes more widespread.
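
As a small illustration of the opt-out versus opt-in debate raised above, the sketch below filters a training corpus against a registry of creators who have withheld (or granted) consent; the data structures and names are hypothetical, not any vendor's real pipeline.

```python
# Hypothetical consent filter applied before training: drop opted-out creators,
# or keep only explicit opt-ins, depending on policy. All names are invented.
from dataclasses import dataclass


@dataclass
class Work:
    creator: str
    image_path: str


OPTED_OUT = {"artist_a", "photographer_b"}  # invented opt-out registry
OPTED_IN = {"stock_agency_c"}               # invented opt-in registry


def filter_training_set(works: list[Work], require_opt_in: bool) -> list[Work]:
    """Apply an opt-out policy, or a stricter opt-in policy, to the corpus."""
    if require_opt_in:
        return [w for w in works if w.creator in OPTED_IN]
    return [w for w in works if w.creator not in OPTED_OUT]


corpus = [Work("artist_a", "a.png"), Work("stock_agency_c", "c.png"),
          Work("hobbyist_d", "d.png")]
print(len(filter_training_set(corpus, require_opt_in=False)))  # 2: opt-out only
print(len(filter_training_set(corpus, require_opt_in=True)))   # 1: opt-in only
```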