Ethics 1:

Deep-fake copyright claims

Since the creation of deep-fake technology, there have been many ways in which it could be used in positive contexts: expression, business, accessibility, and as an artistic medium. In many cases, however, deep fakes are being used maliciously, with the intent of slandering individuals and businesses, deepening society's already growing distrust in media. This fosters a kind of ‘factual relativism’, where “deep fakes can enable the least democratic and authoritarian leaders to thrive as they can leverage the ‘liar’s dividend’, where any inconvenient truth is quickly discounted as ‘fake news’”.

Several ethical concerns arise from the creation of deep fakes. First is consent: having someone’s face used in a deep fake, regardless of the media it is inserted into. Another issue is synthetic resurrection, the idea that individuals bear the right to control the commercial use of their own likeness; in some states, this right even extends after death. There is little existing legislation pertaining to the use of deep-fake technology, and because of this, the landscape surrounding the creation and use of deep fakes remains very turbulent.

In our opinion, this is not the fault of the software itself, since it has many ethical applications, such as replacing stunt doubles’ faces with those of their Hollywood star counterparts. While the original software engineers may not have had bad intentions, we believe that the individuals who apply the AI in unethical ways, such as inserting a Twitch streamer’s face into an adult video, are at fault.

Sharing customer data with smart home devices

Amazon Echo privacy was put to the test in 2017 in Bentonville, Arkansas, when local law enforcement served Amazon with a warrant demanding Echo recordings as evidence in a hot-tub murder case.

The Amazon Echo is one of the newer voice assistant technologies that works by ‘always listening’ to the room in the background so that it can be activated by a pre-set wake word (such as its name). However intimidating the concept of an ‘always listening’ device may appear, its recording functions are not activated until after the wake word is heard. Only then are voice recordings and transcripts captured and stored on Amazon’s online servers. This mirrors how AI is typically developed: by teaching the device with a large enough data sample, it learns the user’s speech patterns for a better grasp of the user’s voice. Users can also review and erase any recordings stored from their Echo.
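The gating described above can be sketched in a few lines. This is a hypothetical illustration, not Amazon's actual implementation: the names (VoiceAssistant, detect_wake_word) and the use of text frames in place of audio are assumptions made for clarity. The point is that the rolling buffer is continuously overwritten and never leaves the device; only frames heard after the wake word are stored.

```python
# Hypothetical sketch of wake-word gating on an "always listening" device.
# Audio "frames" are plain strings here; a real device runs an on-device
# acoustic model over short audio windows instead.

from collections import deque

WAKE_WORD = "alexa"

def detect_wake_word(frame: str) -> bool:
    # Stand-in for an on-device wake-word model; here we just match text.
    return WAKE_WORD in frame.lower()

class VoiceAssistant:
    def __init__(self, buffer_frames: int = 4):
        # Short rolling buffer that is continuously overwritten ("always
        # listening") but never persisted or uploaded on its own.
        self.buffer = deque(maxlen=buffer_frames)
        self.recording = False
        self.uploaded = []  # stands in for server-side storage

    def on_audio_frame(self, frame: str) -> None:
        if self.recording:
            # After the wake word, frames are captured and stored.
            self.uploaded.append(frame)
            if frame == "<end-of-request>":
                self.recording = False
        else:
            self.buffer.append(frame)  # discarded as new frames arrive
            if detect_wake_word(frame):
                self.recording = True

    def delete_recordings(self) -> None:
        # Users can review and erase stored recordings.
        self.uploaded.clear()
```

Feeding in ambient chatter leaves nothing uploaded; only speech following the wake word, up to the end of the request, is retained, and the user can clear it afterwards.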

Regarding the privacy of its customers, Amazon’s existing policy states that it “will provide information where appropriate to comply with the law,” yet in this case the company released a public statement saying it does not “release customer data without a valid and binding legal demand properly served on us”.

Despite the stipulations outlined in Amazon’s privacy policy, it remains very vague about how far Amazon is willing to go to protect customer data. The police submitted the warrant and requested access to the recordings, but even with that legal document in hand, Amazon remained steadfast in disclosing nothing beyond basic account information. This story brings to light contentions over the privacy policies of voice assistants, not only Amazon Alexa but also other technologies of this kind, like Siri, Bixby, and Cortana. Where should the line be drawn in giving away customer data? Had this been a Supreme Court case, would the company have given the government easier access to the data? If lines of revenue were threatened, would the company have complied with the warrant’s requests? Why do Amazon’s servers hold on to customer recordings for as long as they do, if the data exists only to teach the machine to recognize its users? Are the recordings used to serve targeted ads from user to user, as speculated in recent media?

While the software engineers programmed the device to record and retain audio with the intent of improving the existing technology, it is ultimately up to the company what it does with that data, and it can act in whichever party’s interest best serves it, whether that is the government, the customer, or a third-party buyer. That is where the ethical dilemma arises. With so many questions still in the air, many customers and consumers alike remain wary and skeptical about purchasing smart home devices of any kind.

AI Art copyright claims:

Artificial intelligence is typically created by teaching machines existing patterns through exposure to extremely large data sets. Just as an online AI chatbot is exposed to large sets of questions and answers, AI art generators are exposed to all kinds of existing art pieces and creative styles for the bot to learn and imitate. As a result, these bots take art styles and images from artists and create art faster, without permission from, much less attribution to, the artists they learned from, which brings into question the ethics of creating AI art.
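A deliberately tiny toy can show why attribution disappears in this process. This is not a real generative model; the data, names, and two-number "style vectors" are all invented for illustration. What it demonstrates is that training typically compresses labeled examples into parameters that blend the inputs, so the output is derivative of the artists' work yet carries no record of who contributed.

```python
# Toy illustration of pattern learning from labeled artworks. The "model"
# is just an average of the training styles; artist names are dropped at
# training time, which is exactly how attribution gets lost.

training_set = [
    {"artist": "Artist A", "style": [0.9, 0.1]},  # e.g. bold lines, muted color
    {"artist": "Artist B", "style": [0.2, 0.8]},  # e.g. soft lines, vivid color
]

def train(examples):
    # Collapse all examples into one style vector; the labels never make
    # it into the learned parameters.
    n = len(examples)
    dims = len(examples[0]["style"])
    return [sum(e["style"][d] for e in examples) / n for d in range(dims)]

def generate(model, prompt_weight):
    # "New" art is an interpolation of the learned average: derivative of
    # every training piece, traceable to no single artist.
    return [round(v * prompt_weight, 3) for v in model]

model = train(training_set)
print(generate(model, 1.0))   # a blend of both artists' styles
print("artist" in str(model)) # the model retains no attribution at all
```

Real systems learn millions of parameters instead of two numbers, but the asymmetry is the same: the training images shape every output, while nothing in the model points back to their creators.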

These bots take art from artists online and use it to create new pieces or to impersonate a style. Beyond using artists’ drawings without consent, people also worry that these bots will make artists’ lives harder, since a polished art piece generated from a prompt is just one click away, instead of having to commission an artist to draw it. Not only does this cross into copyright territory, it can also prevent artists from making a profit from their work. Outside of digital art, AI can also take an artist’s voice and use it to produce a song the artist has never sung. As new artificial-intelligence bots are created, the ethics of what should and should not be allowed become more questionable. We think this type of ethical dilemma is more difficult to pin on the software engineer or the software, since the problem is essentially copyright over a person’s intellectual and artistic content. Even YouTube struggles to decide what should count as stealing someone else’s content versus creating new content from it, with the rise of reaction videos and Let’s Plays.