Ethics 1

Software engineers work under various ethical codes, but sometimes the software they develop fails to uphold those codes and exhibits unethical behavior. Research three examples of software systems, past and present, that have been accused of unethical behavior or practices. Evaluate each example in terms of its impact on individuals, communities, or society as a whole, identifying the specific unethical behavior each system exhibited and how it violated ethical principles or standards. Finally, analyze the reasons for the unethical behavior, including factors such as the technology itself, the development process, and the motivations of those who created or used the software.

Algorithmic Bias

Unfortunately, humans hold biases, both conscious and unconscious, shaped by societal, familial, and social conditioning. These biases can be perpetuated in the artificial intelligence systems that humans develop and train: when an AI system is trained on biased data, including records of historical inequalities and discriminatory measures tied to race, gender, nationality, or sexual orientation, it may exhibit those same biases.
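
To make this concrete, here is a minimal sketch, using entirely synthetic data and the scikit-learn library, of how bias baked into historical training labels is reproduced by a model; every variable and number here is hypothetical:

    # Minimal sketch with synthetic data: a model trained on
    # historically biased labels learns to penalize group membership.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, n)    # protected attribute (0 or 1)
    skill = rng.normal(0, 1, n)      # the signal we actually care about

    # Past decisions were biased: group 1 was approved less often,
    # even at the same skill level.
    labels = (skill - 1.0 * (group == 1) + rng.normal(0, 0.5, n)) > 0

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, labels)

    # The learned weight on `group` comes out strongly negative: the
    # model reproduces the historical bias instead of judging skill.
    print(model.coef_)

No step in this pipeline is malicious, yet the trained model discriminates, which is exactly the failure mode described above.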

One of the most widely cited examples of this unethical behavior is the 2015 Google Photos racial labeling incident. Google Photos uses a convolutional neural network (CNN) to recognize and label the contents of photos. In 2015, the system labeled photos of a Black software developer and his friend as “gorillas.” In a poor attempt to “fix” this, Google simply removed “gorilla” and “monkey” from the label vocabulary. That workaround suppressed the symptom but did not address the underlying problem: image-labeling models are imperfect, are only as good as their training data, and cannot reliably identify real-life corner cases.
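
A rough sketch of such a labeling pipeline, using an off-the-shelf pretrained ResNet-50 from torchvision rather than Google's actual model and a placeholder image path, shows why removing a label only hides the problem:

    # Sketch of CNN image labeling with a pretrained model; "photo.jpg"
    # is a placeholder, and the blocklist mimics Google's workaround.
    import torch
    from torchvision import models
    from PIL import Image

    weights = models.ResNet50_Weights.IMAGENET1K_V2
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    img = preprocess(Image.open("photo.jpg")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]

    # Deleting labels from the vocabulary suppresses the visible
    # symptom but leaves the model's learned features unchanged.
    BLOCKED = {"gorilla"}
    categories = weights.meta["categories"]
    top = probs.topk(5)
    for p, i in zip(top.values.tolist(), top.indices.tolist()):
        if categories[i] not in BLOCKED:
            print(f"{categories[i]}: {p:.2f}")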

Algorithmic bias is a complex issue, and no single entity or individual can be held solely responsible for it. It is a systemic problem arising from the intersection of several factors: the data used to train AI models, the algorithms themselves, and the societal biases present in the world. The creators of AI systems, including data scientists and engineers, are responsible for designing the algorithms and ensuring they are not biased. However, the limitations of training data and the difficulty of detecting bias in AI systems make completely unbiased algorithms hard to achieve. The organizations that deploy AI systems, and the decisions they make about how to use them, can also contribute: a company that trains on biased data, or whose algorithms reinforce existing inequalities, bears responsibility for the bias present in its systems.

Security and Privacy

Security and privacy are essential to the Internet as we know it. Vast amounts of sensitive information pass through online systems every day, so software must be able to secure that information and protect individuals’ rights to confidentiality and autonomy. Breaches of security and privacy put users at risk of identity theft, fraud, financial loss, and the exposure of sensitive personal information, and beyond financial harm they can cause significant emotional distress and damage to an individual’s reputation. Ensuring the security and privacy of software is therefore crucial to promoting trust, transparency, and respect for users’ rights.

In 2013, a data breach at Yahoo affected every user account, roughly three billion in total. The stolen data included users’ names, email addresses, phone numbers, dates of birth, and security questions and answers. It is widely considered the largest data breach in history, and it raised concerns about the security practices of large technology companies, as well as the potential consequences for affected users, including identity theft and phishing attacks. Following a second breach in 2014, Yahoo’s own review found that “certain senior executives did not properly comprehend or investigate, and therefore failed to act sufficiently upon, the full extent of knowledge known internally by the Company’s information security team.” This is an ethical failure: users’ information was severely compromised, and staff did not adequately prepare for or protect against the breach.

In cases of data breaches like the Yahoo attack, responsibility can be attributed to multiple parties, including the company, the hackers, and sometimes individual users. Preventing such breaches, however, is ultimately the responsibility of the developers and the company, since it requires a combination of technical and organizational security measures, such as keeping software and systems up to date with the latest security patches and protecting user accounts with strong password storage and two-factor authentication.
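
As a minimal sketch of two of the measures just mentioned, the snippet below stores only a salted password hash and verifies a time-based one-time code as a second factor. It relies on the third-party bcrypt and pyotp packages, and the login function is illustrative, not any company’s real code:

    # Illustrative sketch: salted password hashing plus TOTP-based 2FA.
    import bcrypt
    import pyotp

    # Store only the salted hash, never the plaintext password.
    stored_hash = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt())
    totp_secret = pyotp.random_base32()  # shared with the user's authenticator app

    def login(password: bytes, code: str) -> bool:
        # Both factors must check out before the user is let in.
        if not bcrypt.checkpw(password, stored_hash):
            return False
        return pyotp.TOTP(totp_secret).verify(code)

    print(login(b"correct horse battery staple", pyotp.TOTP(totp_secret).now()))

Storing credentials this way means that a breach yields salted hashes rather than directly reusable secrets.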

Data Collection

Data collection refers to the process of gathering and analyzing information from various sources, including customer behavior, market trends, and other relevant data points. Companies use this information to make business decisions such as developing new products, optimizing marketing strategies, or improving customer experiences. One of the most prominent examples of data collection turning unethical is dynamic pricing: a strategy in which companies adjust prices in real time based on factors such as supply and demand, customer behavior, and market trends. It involves constantly monitoring market conditions and making rapid pricing adjustments to maximize profit, and data collection plays a critical role by supplying the information those decisions depend on. For example, if a company notices that customers are more likely to buy a particular product at a certain time of day, it can adjust its prices to reflect that pattern.
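
In its simplest form, such a rule might look like the hypothetical sketch below, where the price scales with the observed demand-to-supply ratio; the numbers and function name are invented for illustration:

    # Hypothetical demand-based pricing rule: the multiplier grows with
    # the demand/supply ratio and never drops below the base price.
    def dynamic_price(base_price: float, demand: int, supply: int) -> float:
        multiplier = max(1.0, demand / max(supply, 1))
        return round(base_price * multiplier, 2)

    print(dynamic_price(10.00, demand=150, supply=50))  # 30.0
    print(dynamic_price(10.00, demand=40, supply=50))   # 10.0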

While dynamic pricing might seem like a fair and reasonable pricing strategy at first glance, it can become problematic when there are no limits to the prices that can be charged. When algorithms are allowed to operate without any caps on prices, they can end up exploiting people in ways that feel unfair and abusive. For instance, it might be reasonable for prices to surge after a football game when lots of people are leaving the same location. However, it would be quite another matter for prices to rise based on factors like a low phone battery or bad weather at night.

This problematic use of dynamic pricing occurred during the Brooklyn subway shooting in April 2022, when Uber prices more than tripled as people tried to leave the area safely. Uber has been accused of raising prices to exorbitant levels, taking advantage of people in desperate situations, and of using tactics that artificially create high demand, such as sending notifications encouraging users to request rides during peak periods. This behavior results from both the software and the engineers who wrote it: the software is designed to respond to a spike in demand around an event in a specific location, but the absence of any upper limit on surge pricing during an emergency like a shooting is something the engineers should have accounted for in their algorithm design.

While dynamic pricing can be a useful tool for companies, it should be applied ethically and transparently so that customers are not unfairly exploited. Businesses and engineers should consider setting upper limits on surge pricing during emergencies to keep their pricing algorithms both fair and effective.
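
As one illustration of what such a limit might look like, the sketch below caps the surge multiplier and disables surge entirely once an emergency is declared; the cap values are invented, not Uber’s real parameters:

    # Sketch of surge pricing with a hard ceiling, and no surge at all
    # during a declared emergency. All thresholds are illustrative.
    NORMAL_CAP = 3.0
    EMERGENCY_CAP = 1.0

    def surge_multiplier(demand: int, supply: int, emergency: bool) -> float:
        raw = demand / max(supply, 1)
        cap = EMERGENCY_CAP if emergency else NORMAL_CAP
        return min(max(raw, 1.0), cap)

    print(surge_multiplier(400, 100, emergency=False))  # 3.0 (capped)
    print(surge_multiplier(400, 100, emergency=True))   # 1.0 (no surge)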