ISTANBUL: Artificial intelligence (AI) algorithms and AI-powered tools have become widespread in today’s world, but these solutions are not without compromises: developers can shape AI responses to specific questions and topics, allegedly introducing biases that make the use of AI a potential risk, especially in politics.
For instance, the Meta AI chatbot, developed by Facebook parent Meta Platforms, did not respond to questions about the assassination attempt on former US President Donald Trump, the Republican presidential candidate, while it provided detailed information on the electoral campaign of Vice President Kamala Harris, the presumptive Democratic nominee, allegedly pointing to a bias in the algorithm.
Additionally, content on Facebook and Instagram depicting Hamas political bureau chief Ismail Haniyeh, who was assassinated in Iran, gets flagged by AI for removal for violating the rules of Meta’s social media platforms, allegedly revealing a degree of manipulation in the algorithm that governs moderation on these platforms.
‘While today AI doesn’t recommend Trump, tomorrow it may be someone else’
Sercan Okur, managing director of Istanbul-based defense firm Pavo Group’s cybersecurity subsidiary, told Anadolu that profiling experts work alongside engineers when developing AI algorithms to better direct what the algorithm is allowed to produce.
‘While today AI doesn’t recommend Trump, tomorrow it may be someone else,’ said Okur.
Okur stated that developing AI is similar to ‘raising children,’ as the algorithm’s metaphorical upbringing can influence its ethics.
Closed-source AI tools are prone to manipulation, while open-source solutions offer transparency
Okur highlighted that open-source AI tools offer much-needed transparency, as individual contributors partaking in an open-source AI project can point out any manipulations done to the publicly available code.
However, he noted that closed-source AI tools and algorithms are prone to manipulation, and these proprietary solutions can be employed for all sorts of political and financial purposes.
‘While there is legislation on the ethical use of AI in the European Union, work is still underway to pass such a bill in the US, and currently AI can be easily manipulated because of a lack of independent auditing practice,’ said Okur.
Emotional analysis at work in algorithm
Okur highlighted that the alleged automated removal of pro-Palestine social media posts from among ‘billions of posts’ containing the word ‘Palestine’ points to a sort of emotional analysis at work in the algorithm.
‘The developers of the AI at play here employed an emotion scoring algorithm to categorize content to deduce whether a post is in support of Palestine, or if it is criticizing Palestine, and since AI scans all content posted, the organization that develops the algorithm can automatically remove positive posts about Palestine,’ said Okur.
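The mechanism Okur describes, scoring a post’s sentiment toward a topic and then automatically removing posts on one side of a threshold, can be illustrated with a minimal sketch. This is a hypothetical, keyword-based toy, not Meta’s actual system; all cue words, function names, and thresholds here are illustrative assumptions.

```python
# Hypothetical sketch of emotion scoring for content moderation.
# Real systems use trained classifiers; this toy uses keyword cues.

POSITIVE_CUES = {"support", "solidarity", "stand with", "free"}
NEGATIVE_CUES = {"condemn", "against", "criticize"}

def emotion_score(post: str, topic: str = "palestine") -> float:
    """Crude sentiment score toward `topic`:
    positive -> supportive, negative -> critical, 0 -> neutral or off-topic."""
    text = post.lower()
    if topic not in text:
        return 0.0
    pos = sum(cue in text for cue in POSITIVE_CUES)
    neg = sum(cue in text for cue in NEGATIVE_CUES)
    return float(pos - neg)

def moderate(posts: list[str], threshold: float = 1.0) -> list[str]:
    """Keep only posts scoring below the threshold, mirroring the kind of
    automated removal of supportive posts the article alleges."""
    return [p for p in posts if emotion_score(p) < threshold]
```

The point of the sketch is that whoever sets the cue lists and the threshold decides which side of a topic gets removed, which is exactly the manipulation risk Okur raises for closed-source systems.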
He noted that the use of AI algorithms and tools depends on what the user or the organization wishes to do, as these solutions can be effective in removing inappropriate and fraudulent content online.
Source: Anadolu Agency