Ethics 1

Facebook

Facebook’s unethical handling of user data in the Cambridge Analytica scandal raises serious concerns about user privacy and about the responsibility of software engineers to prioritize ethical considerations in their work. In 2018, it was revealed that the political consulting firm Cambridge Analytica had obtained data on tens of millions of Facebook users without their consent (Facebook later put the figure at up to 87 million), and that this data had been used to target political advertising during the 2016 US presidential election.

The data was collected through a quiz app developed by Aleksandr Kogan, a researcher at Cambridge University, which was used by approximately 270,000 Facebook users. However, the app also collected data from the users’ friends, resulting in the data of millions of users being harvested without their knowledge or consent. The data was then used by Cambridge Analytica to create targeted political advertising based on users’ interests, beliefs, and behaviors.

This behavior is problematic because it violates users’ privacy and undermines the trust that users have in the platform. Facebook’s failure to adequately protect user data and prevent its misuse by third-party developers represents a breach of trust with its users and highlights the need for stronger data protection regulations.

Furthermore, Facebook’s business model of collecting and monetizing user data through targeted advertising is inherently problematic from an ethical standpoint. The platform’s algorithms are designed to track users’ online activity and use that data to show them personalized ads. This creates a tension between the platform’s financial interests and users’ privacy rights, leading to conflicts of interest and potentially unethical behavior.

The problem is not just the behavior of individual software engineers, but also the functions being performed by the software itself. Facebook’s algorithms prioritize engagement metrics, such as time spent on the platform and user interactions, over user well-being and privacy. This can lead to the platform promoting sensational or controversial content that generates more engagement, even if it is harmful or misleading.
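
To make this concrete, consider a minimal, purely hypothetical sketch of an engagement-driven ranking function. This is not Facebook’s actual algorithm; the signal names and weights are invented for illustration. The point is that nothing in the objective rewards accuracy or user well-being, which is exactly the concern described above.

from dataclasses import dataclass

@dataclass
class Post:
    clicks: int              # user interactions with the post
    comments: int
    shares: int
    predicted_dwell: float   # predicted seconds the user will spend on the post

def engagement_score(post: Post) -> float:
    # Score reflects engagement only; the weights are illustrative.
    return (1.0 * post.clicks
            + 2.0 * post.comments
            + 3.0 * post.shares
            + 0.1 * post.predicted_dwell)

def ranked_feed(posts: list[Post]) -> list[Post]:
    # Sensational or misleading content that happens to generate engagement
    # rises to the top, because nothing in the objective penalizes it.
    return sorted(posts, key=engagement_score, reverse=True)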

Moreover, the complexity of Facebook’s algorithmic systems makes it difficult to determine who is responsible for unethical outcomes. While the data harvesting at the heart of the Cambridge Analytica scandal was carried out by a third-party developer, Facebook’s lax data protection policies and inadequate oversight allowed it to happen.

In conclusion, the Cambridge Analytica scandal highlights the need for stronger data protection regulations and for ethical considerations in software engineering. The problem lies not only with the behavior of individual engineers but also with the functions the software itself performs. It is essential for tech companies to prioritize user privacy and well-being over financial interests and to accept stronger oversight and regulation to prevent future breaches of trust.

Artificial Intelligence

The development of artificial intelligence (AI) raises a number of ethical concerns, including bias and discrimination, privacy violations, and the impact of AI on employment. One of the primary concerns is that AI systems may perpetuate or amplify bias if the data they are trained on is unrepresentative or if the algorithms used to make decisions are not designed with fairness in mind. For instance, facial recognition technology has been found to have higher error rates for people with darker skin, which can lead to misidentification and discriminatory outcomes in law enforcement and other areas.

Another concern is that AI may violate our privacy and individual rights. For example, Clearview AI has harvested billions of photos from social media without users’ consent, raising serious questions about privacy and the potential for abuse. There have also been instances where devices such as Amazon’s Alexa have recorded conversations without users’ knowledge or consent, highlighting the risk of privacy violations.

In addition, there are concerns about the impact of AI on the job market, as the technology may replace human workers in certain industries. For example, self-driving trucks and delivery vehicles may eliminate jobs for drivers, while automated call centers may replace human customer service representatives. Bias also appears in deployed systems: predictive policing algorithms have been criticized for perpetuating racial biases and contributing to over-policing of communities of color, and Amazon developed an AI hiring tool that was found to be biased against women because it was trained on resumes submitted to the company over a ten-year period, most of which came from men.

The problem of unethical behavior in the development of AI is complex and can be attributed to both the behavior of software engineers and the functions performed by the software. Engineers may prioritize efficiency or profitability over ethical considerations, or they may inadvertently introduce biases into the system. For example, a facial recognition system may have a higher error rate for people with darker skin because the training data used to develop the system did not include enough diverse faces. At the same time, the functions performed by AI may raise ethical concerns that go beyond the engineer’s behavior, such as when AI is used to make decisions that have significant consequences for people’s lives.
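
One practical way to surface this kind of disparity is to report error rates separately for each demographic group rather than relying on a single overall accuracy figure. The sketch below assumes a labeled evaluation set annotated with a group attribute; it is a generic audit pattern, not the interface of any particular vendor’s system.

from collections import defaultdict

def error_rates_by_group(records):
    # records: iterable of (group, true_label, predicted_label) tuples.
    # Returns the misclassification rate per group, so a system that is
    # accurate overall but much worse for one group cannot hide behind
    # its aggregate accuracy.
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical usage with a face-matching evaluation set:
# results = [("darker_skin", "match", "no_match"), ("lighter_skin", "match", "match")]
# print(error_rates_by_group(results))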

Addressing these ethical concerns requires a collaborative effort that involves engineers, ethicists, policymakers, and other stakeholders. We must promote ethical AI development practices to ensure that AI is developed and deployed in a way that maximizes its benefits while minimizing its harms. This could involve developing more diverse and representative training data for AI systems, implementing transparency and accountability measures, and ensuring that AI is used in ways that do not unfairly disadvantage certain groups of people. It is important to note that addressing these ethical concerns will require not only changes in software engineering practices but also an understanding of the functions and impacts of AI technology.

Volkswagen

In September 2015, the Environmental Protection Agency (EPA) revealed that Volkswagen had intentionally built and installed software to evade nitrogen oxide (NOx) emissions regulations. Volkswagen had installed this software in the diesel engines of cars sold in the United States so that the cars could detect when they were being tested; detecting a test allowed the engines to manipulate their apparent emissions. The deception was accompanied by aggressive advertising boasting about the cars’ low emissions. This is a prime example of an unethical piece of software: it was created with the intent to harm the environment and to evade important environmental restrictions in exchange for greater profit for Volkswagen.

Volkswagen later admitted that around 11 million diesel vehicles worldwide, including Volkswagen Group models such as the Audi A3, had been equipped with the defeat device. How such software could work is hard to imagine at first: how could code running in an engine detect that an environmental test was underway? While not all the details are known, researchers deduced that the engine software sensed conditions unique to test scenarios, including vehicle speed, how long the engine had been running, barometric pressure, and the position of the steering wheel. Cars are tested in a controlled environment, and the software recognized that environment. The engine would therefore switch into a cleaner mode while being tested, but return to a high-emission mode under a regular driver. Comparing the two modes is what highlights how unethical the practice was: testing showed that, on the road, the cars emitted up to 40 times the nitrogen oxide pollution allowed in the United States in 2015, all of it going undetected because of this software.
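
Volkswagen’s actual code has never been published, but the description above implies conditional logic roughly like the speculative sketch below. It uses only a subset of the signals mentioned, and every threshold and name is invented purely to illustrate the mechanism investigators described, not to reproduce the real calibration.

def looks_like_emissions_test(speed_kmh, seconds_running, steering_angle_deg):
    # Heuristic guess at a dynamometer test: a steady, scripted drive with
    # the steering wheel essentially untouched. Thresholds are illustrative.
    wheel_untouched = abs(steering_angle_deg) < 1.0
    scripted_profile = speed_kmh < 120 and seconds_running < 1800
    return wheel_untouched and scripted_profile

def select_engine_mode(sensor_readings):
    # Low-emission calibration only when a test is suspected; otherwise a
    # higher-performance, higher-NOx calibration -- the switching behavior
    # that made this software a "defeat device" under US law.
    if looks_like_emissions_test(**sensor_readings):
        return "low_emission_mode"
    return "normal_high_emission_mode"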

The dilemma Volkswagen presents is who should be held accountable. Did the software engineers who wrote this program devise the scheme themselves, or were they directed by management to do so? Building and validating such a system would have required substantial testing and expense, so the burden cannot fall entirely on the software engineers; the entire company had to be held liable, and it was. The situation poses a further question: if Volkswagen was able to do this, what other companies are still doing it successfully, and is the environmental testing performed on our cars thorough enough? Overall, this software was intentionally malicious and poses a great threat to the environment and the automobile industry alike.