Ethics 1

Example 1: ChatGPT

ChatGPT is a tool that we all know and love. On the user side, using it to complete homework or to answer coding interview questions for you is clearly unethical, but this discussion will focus on the developer side.

ChatGPT is not advanced enough to be considered able to “think.” It has been trained on a large set of mostly correct data, so it can provide what appear to be thoughtful responses. However, it has no way of knowing whether the data it has been fed are true, and as such, it is capable of confidently providing wrong answers.

On startup, a small note warns that the information might be wrong. Realistically, few users read that note or think twice about it, and plenty of users don’t know enough about AI to understand how and why ChatGPT could provide incorrect information. The website design is somewhat unethical because it doesn’t make the software’s limitations visible enough, even though those limitations are something every user should be made aware of. While ChatGPT’s function is not inherently ethical or unethical (that depends on how the user chooses to use it), the developers and managers have a responsibility to be upfront with users about its limitations rather than hiding them.

Source


Example 2: Facebook data breach

In the mid-2010s, the personal data of up to 87 million Facebook users was breached. Specifically, a company called Cambridge Analytica (now bankrupt) gathered data from 270,000 survey respondents through an app, using an informed consent process. However, Facebook’s software allowed Cambridge Analytica to gather data on the respondents’ Facebook friends as well. The data were detailed enough that Cambridge Analytica could construct psychological profiles and determine which form of advertising would be most effective on specific people.

This is all clearly unethical: beyond breaching 87 million people’s data, the data were also used for nefarious purposes, such as targeted political advertising. The breach was only possible at this magnitude because Facebook’s software allowed another company to gather data on survey respondents’ Facebook friends, and we see no logical reason for granting that access. It is nearly impossible for this to have been caused by negligence, so the main problem is that the product’s behavior was too intrusive by design. The software engineers should also have known that too much access was being granted, since limiting access and scope is a common practice in code (a toy sketch of such a scope check follows below). Though the problem should have been caught at the design level, it should also have been caught at the implementation level: code goes through a review process, and the access could have been revoked at any time. The problem here lies in the software design and in the software developers who chose to ignore an obvious problem.
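As an illustration of what scope limiting could have looked like, here is a minimal sketch in Python. The scope names, token class, and data model are all hypothetical (this is not Facebook’s actual Graph API): the consenting user’s own data is gated behind one scope, while friends’ data sits behind a separate scope that a survey app is never granted.

```python
# Hypothetical sketch of scope-limited API access; the scope names and
# data model are illustrative, not Facebook's real API.

class ScopeError(PermissionError):
    pass

class AppToken:
    """An access token issued to a third-party app, carrying explicit scopes."""
    def __init__(self, app_id: str, scopes: set[str]):
        self.app_id = app_id
        self.scopes = scopes

USERS = {
    "alice": {"profile": {"name": "Alice"}, "friends": ["bob", "carol"]},
    "bob":   {"profile": {"name": "Bob"},   "friends": ["alice"]},
    "carol": {"profile": {"name": "Carol"}, "friends": ["alice"]},
}

def get_profile(token: AppToken, user_id: str) -> dict:
    # The consenting user's own data needs only the base scope.
    if "user_profile" not in token.scopes:
        raise ScopeError("missing scope: user_profile")
    return USERS[user_id]["profile"]

def get_friends_profiles(token: AppToken, user_id: str) -> list[dict]:
    # Friends never consented, so their data sits behind a separate scope
    # that a survey app should never be granted in the first place.
    if "friends_profiles" not in token.scopes:
        raise ScopeError("missing scope: friends_profiles")
    return [USERS[f]["profile"] for f in USERS[user_id]["friends"]]

survey_app = AppToken("survey-app", scopes={"user_profile"})
print(get_profile(survey_app, "alice"))  # allowed: Alice consented

try:
    get_friends_profiles(survey_app, "alice")  # friends did not consent
except ScopeError as err:
    print("denied:", err)
```

Under this design, harvesting friends’ data would require an explicit, reviewable grant of the friends_profiles scope rather than coming along for free with the base permission.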

Source


Example 3: Lyft employees spying

In this 2018 article, a former Lyft employee said that access to Lyft’s backend allowed employees to “see pretty much everything, including feedback, and yes, pick up and drop off coordinates.” When asked whether employees abuse this privilege, they replied “Hell yes. I definitely looked at my friends’ rider history.”

This is unethical because riders never consented to having their movements tracked by individual Lyft employees, least of all by people who know them personally. Although working on the backend requires access to user data, software engineers shouldn’t be able to look up a specific person’s data by name or by any personally identifying attribute (in this case, any unique attribute a friend would know). It should certainly be possible to pseudonymize the data by replacing names with ID numbers and hiding the name-to-ID mapping from most SWEs. Even if that were impossible, lookups should be logged, and repeated searches for the same person should be investigated (a sketch of both safeguards follows below). Overall, this seems like a problem with software design and with enforcement of rules.
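To make those two safeguards concrete, here is a minimal sketch in Python with hypothetical table and function names (this is not Lyft’s actual backend): ride history is keyed by opaque user IDs, the name-to-ID mapping lives in a restricted table, and every lookup is logged so repeated searches can be flagged for investigation.

```python
# Hypothetical sketch: pseudonymized ride data plus an audit log of lookups.
# All names, IDs, and coordinates are made up for illustration.

import logging
from collections import Counter

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# The name -> ID mapping is kept in a restricted table that ordinary
# backend engineers cannot query; day-to-day work uses only opaque IDs.
RESTRICTED_NAME_TO_ID = {"Dana": "u-4821"}

RIDES_BY_USER_ID = {
    "u-4821": [("2018-01-05", (37.77, -122.42), (37.79, -122.40))],
}

access_log: Counter = Counter()

def lookup_rides(employee: str, user_id: str) -> list:
    """Fetch ride history by opaque user ID, recording who looked it up."""
    access_log[(employee, user_id)] += 1
    logging.info("ride lookup by %s on %s", employee, user_id)
    return RIDES_BY_USER_ID.get(user_id, [])

def flag_repeat_lookups(threshold: int = 3) -> list:
    """Surface employee/user pairs queried often enough to investigate."""
    return [(emp, uid, n) for (emp, uid), n in access_log.items() if n >= threshold]

for _ in range(4):  # an employee repeatedly checking one rider's history
    lookup_rides("eng_42", "u-4821")
print(flag_repeat_lookups())  # [('eng_42', 'u-4821', 4)]
```

Pseudonymization raises the bar for casual snooping by name, and the audit log makes any remaining abuse detectable and attributable to a specific employee.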

Source

