Ethics

Algorithmic Bias Example: Facial Recognition Software

One example of unethical behavior in a software system is racial bias against people of color in facial recognition software. Facial recognition technology has found many new uses, especially in law enforcement, where it is being adopted to build databases of faces and identify suspects through biometric facial scans. These algorithms distinguish features in white male and female faces accurately, but accuracy drops significantly when the software is applied to darker-skinned faces: it can be about 32% less accurate for darker-skinned female faces than for white male faces. This is a severe problem, because if law enforcement deploys this software, the decreased accuracy for people of color could lead the algorithm to confuse one law-abiding individual with a suspect.

Racial bias in such a system can be attributed both to the developers and to the algorithm itself. When building the software, engineers may have relied on metrics such as eye color, skin color, and the width and height of specific facial features such as noses and lips, but these may not be enough: different populations have different proportions in certain features, and different skin tones have different textures that the algorithm may fail to pick up. In terms of sampling, the engineers may also not have diversified the races of the individuals used to train the model. The result is both the engineers' bias in how the algorithm was trained and the algorithm's bias toward fixating on the features it was trained on rather than generalizing to other kinds of faces.
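
To make the disparity concrete, below is a minimal sketch of how accuracy could be audited per demographic group. The match_faces function and the evaluation records are hypothetical placeholders for illustration, not any vendor's actual API; the point is only that a single aggregate accuracy number can hide a large gap between groups.

```python
# Minimal sketch of a per-group accuracy audit for a face matcher.
# `match_faces` and the evaluation records are hypothetical placeholders.

from collections import defaultdict

def audit_by_group(records, match_faces):
    """records: iterable of (probe_image, true_identity, group_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for probe, true_id, group in records:
        predicted_id = match_faces(probe)   # model under test
        total[group] += 1
        correct[group] += int(predicted_id == true_id)
    return {group: correct[group] / total[group] for group in total}

# Example usage: compare accuracy across groups and report the gap.
# accuracies = audit_by_group(eval_records, match_faces)
# gap = max(accuracies.values()) - min(accuracies.values())
# print(accuracies, f"accuracy gap: {gap:.1%}")
```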

Security Breach and Invasion of Privacy: Apple’s New Surveillance Features in iCloud

Another example of unethical behavior in a software system is when algorithms implemented in technology create issues of security and invasion of privacy. One instance is Apple's new safety features meant to flag child-predatory activity: parents of children with Apple products are notified when their child sends or receives nude pictures in iMessage or saves them to iCloud. Even though the intention behind this kind of surveillance is the safety and well-being of minors, it raises the issue of data privacy. The features Apple is implementing imply that its algorithm will constantly monitor the content of Apple users and store it in a database. A company with access to its users' private information, and the ability to continuously scan that content, gains the power to collect personal data about its users and even to sell that data to other companies. Even if a company chooses not to sell the data, a security breach in its databases could give hackers access to millions of people's information, information that is constantly refreshed because these features continuously scan a user's iCloud.

Another ethical aspect of this surveillance technology is that most users do not know this kind of algorithm exists or what data it collects, let alone consent to the collection of their information. Deciding whether to emphasize the algorithm's fault or the engineers' bias is an interesting case here: the intention behind the algorithm is user safety, and it only tries to identify illegal content, but the repercussions of collecting this data, combined with the fact that much of today's public does not know it is being collected, bring it to the cusp of unethical behavior.
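
As an illustration of the kind of scanning at issue, here is a minimal, hypothetical sketch of matching images against a database of known content by perceptual hash. The perceptual_hash, known_hashes, and notify_reviewers names are assumptions for illustration only; this is not Apple's actual implementation, just one common way automated content matching can work.

```python
# Minimal sketch of hash-based content matching, in the spirit of the
# scanning described above. perceptual_hash(), known_hashes, and
# notify_reviewers() are hypothetical; this is not Apple's actual system.

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two equal-length hashes."""
    return bin(a ^ b).count("1")

def flag_image(image_hash: int, known_hashes: set, threshold: int = 8) -> bool:
    """Flag an image whose hash is close to any hash of known flagged content."""
    return any(hamming_distance(image_hash, h) <= threshold for h in known_hashes)

# Example usage: every upload to a cloud photo library could be hashed and
# checked, which is exactly the kind of continuous monitoring the section
# questions.
# if flag_image(perceptual_hash(photo), known_hashes):
#     notify_reviewers(photo)
```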

Copyright Issues in Machine Learning Models: DALL-E

The last example of unethical behavior in a software system is the use of deep learning models for image generation; one example is DALL-E. DALL-E works by training on images and learning how certain images associate with certain words. Using deep learning, the model builds a neural network in which a user can input any text and DALL-E generates an image from it. The ethical question around new machine learning technology like this arises when we consider data acquisition. Recently, DALL-E has reportedly trained on watermarked Shutterstock images for its learning and generation. The issue is that when DALL-E takes these watermarked, owned images and uses them to generate new images, copyright and ownership of the art are violated. Ethical treatment of artists' and writers' intellectual property is important. Unfortunately, many artists believe their images or artwork were used without their consent, and that the works produced by deep learning models are too similar to those already available on their own websites, making the models direct competitors built on the back of IP theft.

Whether this is the fault of the engineers who created the model or of the algorithm itself is hard to say. We could fault the engineers who feed the model copyrighted training data, but we could also say the algorithm has learned to produce art at a level of precision that can be mistaken for copying other artists' styles. The open question is whether deep learning models actually learn to create new images or merely reproduce the images they were trained on. If the latter is the case, there may not be an issue, but if DALL-E is considered to create new images, then the ethics of stealing artwork still stand.
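
A minimal sketch of the training setup described above is given below, showing where image-caption pairs, and therefore their provenance, enter the pipeline. The Example class, build_training_set, fetch_image, and TextToImageModel are hypothetical names chosen for illustration; DALL-E's real architecture and data pipeline are not public at this level of detail.

```python
# Minimal sketch of a text-to-image training pipeline, as a stand-in for the
# process described above. fetch_image() and TextToImageModel are hypothetical;
# the point is that training data provenance is where the copyright question
# enters.

from dataclasses import dataclass

@dataclass
class Example:
    caption: str      # text the model learns to condition on
    image_url: str    # where the image came from -- the ethical crux

def build_training_set(scraped_pairs):
    """Pair each scraped image with its caption, keeping provenance.

    If scraped_pairs includes watermarked or copyrighted stock images,
    that provenance flows directly into whatever the model learns to generate.
    """
    return [Example(caption=c, image_url=u) for c, u in scraped_pairs]

# Training loop outline: the model repeatedly sees (caption, image) pairs and
# learns to generate an image from text alone.
# model = TextToImageModel()
# for ex in build_training_set(scraped_pairs):
#     model.train_step(text=ex.caption, image=fetch_image(ex.image_url))
# generated = model.generate("an astronaut riding a horse")
```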