Ethics 1

Therac-25 – Source

In 1982, Atomic Energy of Canada Limited (AECL) released a computer-controlled radiation therapy machine called the Therac-25. The Therac-25 relied on concurrent programming to control the beam and compute the radiation doses for its patients. However, race conditions in the code sometimes caused patients to receive doses roughly 100 times greater than intended. These concurrency errors resulted in death or serious injury, with at least six known accidents occurring between 1985 and 1987.
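
To illustrate the kind of bug involved, here is a minimal Python sketch of a race condition. It is an invented illustration, not AECL's actual code; the names (beam_mode, filter_in_place, the two tasks) are all hypothetical. A setup task reads a shared mode flag while an operator task may still be editing it, so the check and the final machine state can disagree:

```python
import threading

# Hypothetical shared state, read and written with no synchronization.
beam_mode = "electron"       # low-power mode selected by default
filter_in_place = False      # whether the beam-flattening filter is positioned

def operator_edits_prescription():
    """Operator task: switches the machine to high-power X-ray mode."""
    global beam_mode
    beam_mode = "xray"

def setup_beam():
    """Setup task: positions the filter based on the mode it reads *now*."""
    global filter_in_place
    mode_seen = beam_mode            # unsynchronized read of shared state
    filter_in_place = (mode_seen == "xray")

# The two tasks run concurrently; their interleaving is up to the scheduler.
t1 = threading.Thread(target=setup_beam)
t2 = threading.Thread(target=operator_edits_prescription)
t1.start(); t2.start()
t1.join(); t2.join()

# If setup_beam read "electron" before the edit landed, the machine is now
# in X-ray mode with no filter -- the hazardous state a lock would prevent.
if beam_mode == "xray" and not filter_in_place:
    print("UNSAFE: high-power beam configured without its filter")
```

The standard fix is to make the read-check-act sequence atomic, for example by holding a single `threading.Lock` around both the operator's write and the setup task's read.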

Software that seriously injures or even kills its users is clearly unethical, and this failure stemmed largely from the lack of proper testing of the software. Software that physically interacts with humans should be thoroughly tested, especially when failure could kill a user. The responsibility lies mostly with the software engineers, who overestimated the robustness of their code, and with the project managers, who allowed the software to be used on patients without thorough testing. Beyond injuring its users, the Therac-25 software is also unethical because it diminished the trust people place in computer-controlled machines.
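
As a sketch of what "thorough testing" could mean at the unit level, the example below uses Python's unittest to assert safety bounds on a dose computation. Both the compute_dose function and the MAX_SAFE_DOSE_CGY limit are invented for illustration, not taken from the Therac-25:

```python
import unittest

MAX_SAFE_DOSE_CGY = 200  # hypothetical safety ceiling, in centigray

def compute_dose(prescribed_cgy, calibration_factor):
    """Hypothetical dose computation placed under test."""
    return prescribed_cgy * calibration_factor

class DoseSafetyTest(unittest.TestCase):
    def test_dose_never_exceeds_safety_ceiling(self):
        # Exercise the computation across a range of plausible inputs.
        for prescribed in (50, 100, 180):
            for cal in (0.9, 1.0, 1.1):
                self.assertLessEqual(
                    compute_dose(prescribed, cal), MAX_SAFE_DOSE_CGY)

    def test_dose_is_never_negative(self):
        self.assertGreaterEqual(compute_dose(100, 1.0), 0)

if __name__ == "__main__":
    unittest.main()
```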


Biased Face AI – Source

Facial recognition systems trained and tuned primarily on white male faces tend to return false positives or misidentify people of color. At a hearing of the House Oversight and Reform Committee, Joy Buolamwini of MIT testified that AI face recognition algorithms were mainly effective only on white men. Buolamwini's research at MIT also showed that Amazon's facial recognition software, Rekognition, has particular difficulty identifying women and people of color. Such misidentification could result in someone being unfairly labeled a criminal for life.

There are many possible causes of biased face recognition. One is that data with biased labels were used in training. Another is that the majority of algorithm developers are white men. As AI plays an increasingly important role in decision making, such bias can widen gender and racial inequality. The social and economic disadvantages these inequalities create then feed back into the training data, making the AI even more biased: a vicious cycle. It is therefore important to regulate both AI algorithms and the companies that develop them, so that the AI becomes more neutral, and to ensure AI development teams have more diverse backgrounds.
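
One concrete check that follows from this paragraph is to measure a model's accuracy separately for each demographic group before deployment, rather than reporting a single overall number. The sketch below is a hypothetical Python example; the accuracy_by_group function and the toy evaluation log are invented for illustration:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute recognition accuracy separately per demographic group.

    records: iterable of (group, predicted_id, true_id) tuples from a
    hypothetical evaluation log, not a real dataset.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data illustrating the kind of disparity Buolamwini's research reported.
log = [
    ("lighter-skinned male", "A", "A"), ("lighter-skinned male", "B", "B"),
    ("darker-skinned female", "C", "D"), ("darker-skinned female", "E", "E"),
]
print(accuracy_by_group(log))
# {'lighter-skinned male': 1.0, 'darker-skinned female': 0.5}
```

A large gap between groups in such a report is a signal to fix the training data or the model before the system is used on real people.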


Data Misuse – Source

In 2018, Christopher Wylie revealed that his former employer, the UK political consulting firm Cambridge Analytica, had acquired and used personal data from Facebook users. The data was originally collected for academic research, yet Cambridge Analytica misused up to 87 million Facebook profiles. The data misuse and privacy issues triggered a series of online movements, such as the #DeleteFacebook trend on Twitter. Facebook CEO Mark Zuckerberg apologized for the situation on CNN and decided to shift the company's initial focus from data portability to locking down data.

Data misuse is defined as using data in ways it was not intended to be used. Many other technology companies, such as Twitter and Google, have also been accused of data misuse. Data misuse violates users' privacy, which can lead to legal issues, heavy fines, and, in the long term, a loss of customer trust. To prevent data misuse, a company should establish clear data-access request processes, set up security policies, and implement identity and access management.
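
As a sketch of that last point, the snippet below shows a minimal role-based access check with an audit trail. The roles, permission names, and request_access function are all hypothetical, invented to illustrate the idea of identity and access management rather than any real IAM product's API:

```python
# Hypothetical role-to-permission mapping for a company's user data.
PERMISSIONS = {
    "researcher": {"read:anonymized"},
    "analyst":    {"read:anonymized", "read:aggregate"},
    "admin":      {"read:anonymized", "read:aggregate", "read:raw"},
}

def request_access(user_role, action, purpose):
    """Grant data access only if the role permits the action, and log
    the stated purpose so every access request is auditable."""
    allowed = action in PERMISSIONS.get(user_role, set())
    print(f"AUDIT: role={user_role} action={action} "
          f"purpose={purpose!r} allowed={allowed}")
    return allowed

# A researcher may read anonymized data, but never raw user profiles --
# the kind of boundary that was missing in the Cambridge Analytica case.
request_access("researcher", "read:anonymized", "academic study")  # True
request_access("researcher", "read:raw", "academic study")         # False
```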