
AI can also be a tool for inhumane acts

2024-09-30 12:13:00

AI can do amazing things, but it can also be used to violate human rights, spread fake news, manipulate people, control weapons, and even cause environmental damage, the audience heard at one of the most popular programs of Corvinus' Researchers' Night.
Budapesti Corvinus Egyetem

‘Many people know the positive side of AI in healthcare and education, but now I will talk about the negative side, the political and military use’

– said Attila Dabis, academic writing consultant at Corvinus and an expert in the field, in his educational presentation on Friday.

According to the researcher, the “bottom” of AI use is when it is employed to commit serious human rights violations. As an outrageous example, he cited the case of the Uyghurs: the roughly 10-12 million members of China's Uyghur Muslim community live in the Xinjiang Uyghur Autonomous Region. Their living conditions resemble those of concentration camp inmates: families are separated and forced to work, especially in the textile industry. The Chinese Communist Party keeps the Uyghurs under constant surveillance through police and civilian informers; by some estimates, there is one observer for every 6.5 square meters. AI systems make this surveillance easier: an iris scan is taken of each person, which identifies them with far higher accuracy than other methods such as fingerprints. Every Uyghur who buys a new phone is required to download the Cleannet Bodyguard app so that a central task force can monitor them. Anyone who fails to download the app faces jail or a so-called re-education camp.

‘In my opinion, this is the bottom of it, this is a violation of fundamental human rights’

– Dabis said. 

 

Deepfakes and Inglourious Basterds

Next, the speaker described cases in which people are deceived and manipulated with deepfake videos. With the help of AI, it is now possible to “steal” the face and voice of a celebrity loved by millions and make them appear to say and do things on screen that they never said or did. He showed the large audience a deepfake video of Morgan Freeman.

Dabis mentioned that a few days after the start of the Russia-Ukraine war, when the Russians had failed to capture Kyiv in a blitzkrieg, a deepfake video of Zelenskyy also appeared in which the Ukrainian president calls on his compatriots to lay down their arms. Another ‘great’ deepfake video used Quentin Tarantino’s 2009 film Inglourious Basterds for political manipulation. The film is set in occupied France during World War II, where Jewish-American soldiers (the Basterds) hunt down and execute Nazi soldiers; one of them smashes in German soldiers’ heads with a baseball bat. In the deepfake video, it is Putin, not a German soldier, who is hit on the head with the bat.

‘The more this technology develops, the more parallel fragments of reality there will be’

– the speaker stated. 

The expert then talked about F-16 fighter jets that, also with the help of AI, can take off and conduct combat operations without a pilot. Lethal Autonomous Weapons Systems (LAWS) can now select targets without direct human control. Where does such robotization lead in war? As Dabis added, these machines are made by humans, and anyone can make mistakes: who is responsible if they malfunction and destroy a hospital instead of a military target? UN Secretary-General António Guterres has called autonomous weapons systems politically unacceptable and morally abhorrent. Of course, the speaker added, responsibility can always be shirked.

Dabis also pointed out that the use of AI causes enormous environmental damage and consumes large amounts of energy. As he emphasized, ChatGPT consumes as much energy in half a day as a small town does.

 

Hate speech, cyberbullying 

The speaker then turned to hate speech: as he said, he deals with this topic a great deal, since he also works for the Szekler National Council. ‘Hate speech spreads on social media. It’s easy to spread and hard to erase,’ he concluded. He said ChatGPT is designed not to produce hateful content, but these safeguards can easily be circumvented. He pointed out that what counts as hate speech depends on many factors; after all, restricting it also restricts a fundamental human right, freedom of speech. He also mentioned the phenomenon of cyberbullying, whose purpose is to make the targeted person feel inferior and which may even lead to suicide.

‘As a parent, this is a problem for me too, because I don’t want my child to be digitally illiterate, but I don’t want them to become a victim of cyberbullying either’

– he added. ‘Grandma scams’ (your grandchild is in trouble, you must transfer money immediately) also work better with the help of AI, the audience heard, since scammers can now steal anyone’s voice and persona.

Finally, the speaker drew attention to the least dangerous, but still harmful, unwanted consequences of AI. Many people in the US already live in smart homes, where Amazon’s Alexa virtual assistant can, with the help of AI, wake the inhabitants, turn on the coffee maker, and take care of almost everything. In one case, a small child who badly wanted a new dollhouse ordered it on Amazon through Alexa, and the toy she wanted was duly delivered. The story was picked up by the American tabloid press, and when the TV coverage aired, Alexa devices in many other households heard it and ordered the dollhouse as well. The case turned into a legal dispute, and Amazon eventually had to pay compensation. Such mistakes are difficult to eliminate, and they can lead to legal disputes and sometimes millions of dollars in compensation.
