Ethics 1

Prompt-based AI Art Generators

The use of artificial intelligence (AI) has risen significantly in the last couple of years, especially as it has become more accessible to the general public. One area of contention is art generated by AI tools from companies such as Stability AI, Midjourney, and DeviantArt. Users enter a prompt (a word, phrase, or sentence) specifying the type of image they want, and the generator produces a wide range of images in a matter of seconds, making it fun for casual use and efficient for business. However, many people are speaking out against AI-generated art because of how it is produced: to generate art, the AI applies pattern recognition to existing artwork, extracting and combining features that match the prompt. By this argument, the end product is unoriginal and, in effect, stolen.
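
To make that mechanism concrete, below is a minimal sketch of how a prompt becomes an image with an open-source text-to-image model (Stable Diffusion, accessed through the Hugging Face diffusers library). The model checkpoint, prompt, and parameters are illustrative assumptions rather than any particular company's setup, but the flow is the same: the user supplies only a prompt, and everything in the output comes from patterns the model learned from existing images.

    # Illustrative sketch: turning a prompt into an image with an open-source
    # text-to-image model via the Hugging Face `diffusers` library.
    # The checkpoint name and parameters are typical examples, not a specific
    # company's production setup.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # a publicly released Stable Diffusion checkpoint
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes a GPU is available

    # The prompt is the only creative input the user supplies; everything else
    # is recombined from what the model learned from its training images.
    prompt = "a watercolor painting of a lighthouse at sunset"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("generated.png")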

Diving deeper, original artists do not consent (and are not given the choice to consent) to having their art used for training and altered for display, meaning AI art generators and their creators collect data unethically. Midjourney even recommends an extensive list of artist names for users to enter in the prompt, using each artist's style and work to quickly generate similar images. Furthermore, in addition to losing potential patrons and opportunities, original artists receive no credit or monetary compensation alongside the final product.

In terms of copyright law, prompt-based AI art generators and the people using them are not breaking any regulations, and the lack of policies addressing AI-generated art is concerning. Because artists have no legal protection, they cannot take action against stolen work. Just last year, a group of artists filed a lawsuit against Stability AI, Midjourney, DeviantArt, and other companies for infringing their copyrights and violating their rights. However, most of the lawsuit was dismissed, as the AI art generators and their products technically do not break current copyright legislation. The reality is that artists have to count on people choosing their original artwork, and on the public collectively abstaining from and protesting against AI-generated art.

Moreover, the automated nature of AI-generated art seems to diminish the years of hard work artists put into honing their skills to produce work of that quality. Creativity and distinctive art styles are also cultivated by personal human experience, which is lost when AI is used. All of this raises the question of whether AI art generators will lead to, or have already led to, the devaluation of human work, a worry the artists in the lawsuit have expressed.

Additionally, AI-generated art can display biases and stereotypes even when the prompt does not specify any. For example, when AI generates an image of a woman, she typically has white or lighter-skinned features, appears thin, and is sexualized in her figure and/or clothing. This happens because AI is trained on whatever data and images are available, and that data is not always vetted for biases and stereotypes by the software engineers or anyone else responsible.

From The Washington Post: example results when the prompt is “Attractive people,” and when the prompt is “A photo of a house in…”
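
One way such defaults creep in is through skew in the training data itself, which could in principle be caught by auditing the data before the model learns from it. The sketch below assumes a hypothetical metadata file ("image_metadata.csv") with annotator-reported attribute columns; the file and column names are made up for illustration and are not from any real generator's pipeline.

    # Minimal sketch of a training-data audit. The metadata file and column
    # names are hypothetical placeholders, not a real system's schema.
    import pandas as pd

    meta = pd.read_csv("image_metadata.csv")  # hypothetical: one row per training image

    # How often does each skin tone / body type appear among images tagged "woman"?
    women = meta[meta["subject_label"] == "woman"]
    print(women["skin_tone"].value_counts(normalize=True))
    print(women["body_type"].value_counts(normalize=True))

    # A heavily skewed distribution here is exactly what later shows up as the
    # "default" woman the model generates when the prompt does not specify otherwise.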

In general, there seems to be a recurring trend in the field of AI: as advancements occur, society as a whole is unprepared to deal with the consequences in an ethical way. It would be another story if AI were used to support artists and their art-making process; then AI would be an additional tool artists can use, rather than a technology that attempts to minimize and replace them. For all the reasons above, the problem originates both from the software engineers' behavior and from the functions being performed by the software.

Links:

https://www.computer.org/publications/tech-news/trends/artists-mad-at-ai

https://beautifulbizarre.net/2023/03/11/ai-art-ethical-concerns-of-artists/

https://www.reuters.com/legal/litigation/artists-take-new-shot-stability-midjourney-updated-copyright-lawsuit-2023-11-30/

https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/

Facial Recognition Technology

Although facial recognition technologies offer more convenience in consumer applications, they have also raised significant ethical concerns. These include violations of privacy, misuse and lack of control of the technology, and discrimination and bias based on race, gender, and age.

Violation of privacy, through the possibility of an individual being tracked and monitored without their consent, is one of the main problems with these technologies. The data collected and stored could present a security risk to the individual being monitored: if it is leaked, it could be used for identity theft and other malicious activities. Since there are no clear regulations for facial recognition, the tool could be abused by private or public entities, even when doing so violates people's right to privacy.
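
To illustrate how low the technical barrier to this kind of tracking is, here is a small sketch using the open-source face_recognition library: a single reference photo (for example, one scraped from social media) is enough to check whether the same person appears in a captured frame. The image paths are placeholders, and this is only an illustration of the matching mechanism, not any vendor's system.

    # Sketch of the matching step behind tracking someone across camera footage,
    # using the open-source `face_recognition` library. Image paths are placeholders.
    import face_recognition

    # Encode a reference photo of the person (e.g., scraped from social media).
    reference = face_recognition.load_image_file("reference_photo.jpg")
    reference_encoding = face_recognition.face_encodings(reference)[0]

    # Encode each face found in a surveillance frame and compare it to the reference.
    frame = face_recognition.load_image_file("camera_frame.jpg")
    for candidate_encoding in face_recognition.face_encodings(frame):
        match = face_recognition.compare_faces(
            [reference_encoding], candidate_encoding, tolerance=0.6
        )[0]
        distance = face_recognition.face_distance([reference_encoding], candidate_encoding)[0]
        print(f"match={match}, distance={distance:.2f}")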

The “Gender Shades” project shows that commercial facial analysis algorithms exhibit significant inequalities across demographic groups. The group affected most is darker-skinned women: the algorithms performed poorest for this group, with error rates up to 34% higher than for lighter-skinned men. This difference in performance emphasizes how important it is to ensure facial recognition technologies are fair when they are developed and used. Companies that provide these services are responsible for addressing these disparities to uphold fairness and avoid systematic biases.
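
The kind of disaggregated evaluation Gender Shades performed can be sketched in a few lines: instead of reporting one overall accuracy, compute the error rate separately for each demographic group. The results file and column names below are hypothetical placeholders.

    # Sketch of a disaggregated evaluation in the spirit of Gender Shades.
    # The results file and column names are hypothetical placeholders.
    import pandas as pd

    results = pd.read_csv("eval_results.csv")  # hypothetical: one row per test image
    results["error"] = results["predicted_label"] != results["true_label"]

    # Error rate broken down by skin type and gender, rather than averaged away.
    by_group = results.groupby(["skin_type", "gender"])["error"].mean().sort_values()
    print(by_group)

    # A single aggregate accuracy can look fine even while one group (e.g.,
    # darker-skinned women) carries a far higher error rate than another.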

In conclusion, the study above shows how facial recognition algorithms can lead to inaccurate identification, potentially discriminating against certain demographic groups. People in these groups could be wrongfully arrested or targeted if facial recognition technologies are not accurate and unbiased. It is the software engineer's responsibility to ensure that these technologies perform as intended in order to prevent unethical use.

Links:

https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/

https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

Deepfake AI

With the advancement of artificial intelligence, it is becoming easier for people to make videos using the faces of others; without AI, the editing would take far more time and likely look less convincing. Because deepfake websites are easy to access and require little time commitment, more people have been making deepfakes. Social media and other online platforms make it easy to obtain a photo of almost any individual's face, and source videos are just as easy to find on the web. These are the only two components needed: the AI does the rest of the work and generates a new video from them. Regardless of the intentions behind a deepfake, using someone's face without their consent to make them appear to do things they did not do is a violation of that individual and is unethical.

Deepfakes are frightening because someone can become a victim without even knowing, and others who come across these videos may assume they are real, spreading a false image of the victim. The well-known YouTuber MrBeast has been a victim of deepfakes: scammers used his likeness to trick people into entering fake iPhone giveaways. Viewers who believed the video were scammed, and MrBeast's reputation was tarnished. This example shows how consent is bypassed, how easily anyone can become a victim of deepfakes, and how deepfakes can be used maliciously. Some deepfakes may not appear malicious, but regardless, creating videos using the faces of others brings consequences and legal issues.

The problem stems from the function being performed by the software, as deepfake websites are easily accessible on the web. Regulations are needed to ensure deepfakes are made with consent, and the posting of malicious deepfakes needs to be monitored on the platforms where they circulate.

Links:

https://www.techtarget.com/whatis/definition/deepfake

https://www.bbc.com/news/technology-66993651