Ethics1

First ethics assignment.

Concerns in AI image generation

With the latest advancements in AI, it has become possible to generate content of almost anything and anyone, even if it's not real. AI image generators can take a simple prompt, or even a photo, pull out the key points, and generate thousands of other images from them. This makes it hard to control what content AI can produce, which has recently been an issue with Google's Gemini. Shortly after Google announced Gemini 1.5, public testing left many users unhappy with its portrayal of people. When users asked for images relating to historical subjects, Gemini produced inaccuracies and misrepresented events and figures in its depiction of gender and ethnicity. In one example, a prompt to generate images of a couple in 1920s Germany returned images of an indigenous couple; in another, a request for images of the American Founding Fathers included multi-racial images that were not historically accurate. Part of the issue stems from the fact that AI creates these images based on what is fed into it.

While these can be considered lighter examples of image-generation problems caused by accident, things become harder when a person sets out to create offensive images on purpose. Deepfakes are often the result of someone deliberately creating material to misrepresent a person or event, and they are really only limited by the imagination. Some may try to ignore the false images, but as the technology advances it becomes increasingly difficult to know which information and images are real and which are not. Cancel culture has become a common occurrence in which a person loses their job, wealth, and status because people gather to defame and expose them, and false information spreading like wildfire can easily put someone in that situation. Once the damage has been done, it can be hard to recover socially. With all of those risks, how to control this technology becomes a hard question.

https://bnnbreaking.com/tech/googles-ai-misstep-a-lesson-in-accuracy-and-ethics-amidst-tech-evolution

https://readwrite.com/google-pauses-geminis-ai-image-generation-features-to-fix-historical-inaccuracies/


https://www.msn.com/en-us/news/us/nuns-fall-victim-to-high-tech-ai-scam-impersonating-bishops/ar-BB1iPx99

Software “fixes” for hardware problems.

     One practice in the aviation industry is that if a new airplane is essentially the same as a previous model, it does not need to go through the very time-consuming and expensive certification process.  This is generally valid, since it is essentially "the same airplane" as the older model, and pilots have to be certified separately to fly each certified aircraft type.  In 2011 Boeing announced that it would produce a 4th generation of the tried-and-true 737 line with more advanced flight controls and more efficient engines.  Since the 737 dated back to 1967, had a long track record, and was already in service all over the world, every pilot already certified to fly a 737 would also be able to fly the new 737 MAX without new training or re-certification.  This was a huge selling point for Boeing.  It turned out that to fit the new, larger, high-efficiency engines between the wing and the ground during taxi, Boeing had to move them significantly forward, and this changed several flight characteristics, including a "dynamic instability": applying more power raised the angle of attack and could make the plane more likely to stall and fall out of the sky.  Since Boeing had a strong incentive to maintain the impression that the 4th generation of the 737 was essentially "the same plane," it compensated by fitting the flight control system with software called the Maneuvering Characteristics Augmentation System (MCAS), which would automatically push the plane's nose down to prevent a stall if it detected that the angle of attack was too high.  This way the behavior and response of the aircraft remained consistent with previous generations and the plane could be viewed as "the same airplane."  In effect: we don't need to go through all the time and expense of redesigning or recertifying, we can just fix it with software.

     There is an old saying that goes something like this: "Just because we can do something doesn't mean that we should."  The 737 MAX has two flight control systems and two sets of instruments that monitor airspeed, angle of attack, and various other flight parameters.  One is active, and the other is in standby.  The decision to use MCAS to compensate for the new flight characteristics, and ultimately to remove all mention of the system from the flight manuals by the time the first planes entered service in 2016, created a situation in which 737 pilots were not aware that the system even existed, let alone that it was ALWAYS monitoring flight information and altering the behavior of the controls even when the pilots thought they were flying the plane manually.  It turns out that the active flight controller got all of its angle-of-attack information from a single sensor of a type known in the industry to be prone to frequent failure.  Even though there is a second sensor on the plane, it is connected to the standby system and is not cross-checked for accuracy.  When MCAS detects that the plane is approaching stall conditions, it pushes the plane's nose down until it no longer detects danger.  It does this even when the autopilot is disengaged, and it overpowers the pilots' attempts to pull the plane's nose up, no matter how hard they try.
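     To make that failure mode concrete, below is a minimal toy sketch, not Boeing's actual code; the function names, thresholds, and structure are all invented for discussion.  It contrasts trim logic that trusts a single angle-of-attack sensor with logic that cross-checks the two sensors already on the airplane and stands down when they disagree.

    # Illustrative toy sketch only -- NOT Boeing's actual MCAS code.
    # All names, thresholds, and structure here are invented for discussion.

    AOA_STALL_THRESHOLD_DEG = 15.0   # hypothetical stall-warning angle of attack
    DISAGREE_LIMIT_DEG = 5.0         # hypothetical allowable sensor disagreement

    def naive_trim_command(active_aoa_deg: float) -> float:
        """Single-sensor logic, roughly the failure mode described above:
        one faulty reading is enough to keep commanding nose-down trim."""
        if active_aoa_deg > AOA_STALL_THRESHOLD_DEG:
            return -1.0   # keep trimming nose down
        return 0.0

    def cross_checked_trim_command(left_aoa_deg: float, right_aoa_deg: float) -> float:
        """A cross-checked version: if the two vanes disagree, distrust both
        and do not intervene automatically, leaving control with the pilots."""
        if abs(left_aoa_deg - right_aoa_deg) > DISAGREE_LIMIT_DEG:
            return 0.0    # sensor disagreement: no automatic nose-down command
        aoa = (left_aoa_deg + right_aoa_deg) / 2.0
        if aoa > AOA_STALL_THRESHOLD_DEG:
            return -1.0
        return 0.0

    # A stuck vane reading 25 degrees while the other reads 3 degrees:
    print(naive_trim_command(25.0))               # -1.0: trims nose down on bad data
    print(cross_checked_trim_command(25.0, 3.0))  # 0.0: disagreement detected, stand down

     The point of the sketch is simply that a single bad sensor is enough to drive the naive logic, while even a basic disagreement check would have refused to act on it.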

     Two years after the first flight of the 737 MAX, Lion Air flight 610 developed a fault with its angle-of-attack sensor 13 minutes into its flight; the plane kept forcing the nose down despite the crew's repeated attempts to pull up, and it crashed into the Java Sea, killing all aboard.  Five months later, Ethiopian Airlines flight 302 experienced MCAS pushing the nose down a few minutes after takeoff, and in their attempts to stop the electric trim from bringing the nose down, the pilots pulled the circuit breakers to the trim motors.  They were unable to regain control of the plane and crashed into the ground in Ethiopia.  These two crashes killed a total of 346 people and led to a worldwide grounding of these aircraft for 20 months.

     The programmers who wrote the software for the MCAS system did not follow the common protocols that pilots are trained in, such as cross-checking instruments to verify their readings, and they wrote software that would seize control of the aircraft and not allow the pilot to take it back.  The design of the system appeared to reflect a lack of understanding of common aviation practice and basic safety principles.

    The obvious ethical failure of hiding the differences between the 4th-generation 737 and its predecessors, and the decision to fix them with software, was compounded by the further ethical failure of writing software that would take control of a vehicle and not allow the operator to perform the maneuvers needed to save the lives of all aboard.  Designing that software to rely naively on inadequate information for such a vital control system is appalling.

    Boeing has had to pay about $20 billion in fines, compensation, and legal fees as a result of these crashes.  The bad publicity and worldwide grounding of its aircraft led to 1,200 orders for new planes being cancelled and a $60 billion loss of revenue.  I don't think that certifying the new design of the 737 MAX would have cost Boeing anywhere near that much.

https://spectrum.ieee.org/how-the-boeing-737-max-disaster-looks-to-a-software-developer

Concerns of Bias Within AI Algorithms

A recent example of unethical use of AI is Amazon's gender-biased recruiting algorithm. The recruiting algorithm was found to have favored prospective male employees over their female counterparts. Amazon had originally intended to use the experimental AI recruiting tool to find potential candidates from all over the web (LinkedIn, etc.). The algorithm rated these prospects on a scale of one to five stars, based solely on their resumes/CVs. After continued use, the AI model had learned to systematically demote women's resumes for certain jobs, especially technical roles such as software developer. Amazon shut down its experimental tool after failing to make the algorithm gender-neutral.

Although Amazon's AI technology is relatively well known for its successes, AI bias is real and can arise from a variety of sources. AI algorithms can unintentionally learn bias from a number of places, such as the data they were trained on, the people who developed them, or even the people using them. In Amazon's case, the algorithm was trained on CVs submitted by candidates over the previous ten years. Because a low proportion of those past applicants and hires were women, the algorithm picked up on that pattern in the data and treated it as a factor of success in the hiring process.
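As a rough illustration of how this happens, here is a small synthetic sketch. It is not Amazon's actual system; the data, feature names, and numbers are made up. It shows how a hiring history with few women can make a gender-correlated word on a resume look like a negative signal, even though gender is never an explicit input.

    # Illustrative toy sketch only -- not Amazon's actual system.
    # Synthetic data and invented feature names, used to show how a model trained
    # on a skewed hiring history can learn a gender-correlated proxy as a signal.
    import random

    random.seed(0)

    def make_historical_resume():
        """Simulate ten years of past applicants where most hires were men, so a
        word like 'women's' (e.g. 'women's chess club captain') ends up negatively
        correlated with the 'hired' label."""
        is_woman = random.random() < 0.2                        # skewed applicant pool
        mentions_womens = is_woman and random.random() < 0.5    # gender-correlated word
        hired = random.random() < (0.10 if is_woman else 0.25)  # historically biased outcome
        return {"mentions_womens": mentions_womens, "hired": hired}

    history = [make_historical_resume() for _ in range(10_000)]

    # "Training": score a feature by the historical hire rate of resumes containing it.
    def hire_rate(rows):
        return sum(r["hired"] for r in rows) / len(rows)

    with_term = [r for r in history if r["mentions_womens"]]
    without_term = [r for r in history if not r["mentions_womens"]]

    print(f"hire rate with 'women's' on resume:    {hire_rate(with_term):.2%}")
    print(f"hire rate without 'women's' on resume: {hire_rate(without_term):.2%}")
    # A model fit to this history assigns a lower score to any resume containing
    # the term, demoting women's resumes without ever being told the applicant's gender.

In this toy data the resumes containing the proxy word show a much lower historical hire rate, so any model optimized to predict past hiring decisions will learn to penalize it, which is essentially the pattern reported in Amazon's case.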

This is unethical because it narrows people's ability to find equal-opportunity employment. With gender or racial bias present in tech companies' hiring processes, minority groups do not get as fair or just a chance as other groups. As AI algorithms continue to be used, we have to make sure such products are trained and used fairly in order to provide an equal opportunity for all.

https://www.imd.org/research-knowledge/digital/articles/amazons-sexist-hiring-algorithm-could-still-be-better-than-a-human/