Social media accountability in the time of misinformation, disinformation and hate content: A conversation with Prof. Ishtiaque Ahmed

Bangladesh’s online environment is riddled with different sorts of misinformation, disinformation and hate content that target people’s gender, religious, ethnic, cultural and economic identities. It is evident that popular social media channels remain the major sources of such content that promotes hatred or violence. In the past, we sat down with third-party fact-checking organizations, legal experts and victims to understand the dynamics and impact of such content on social media. In this episode, we had an intriguing conversation with Ishtiaque Ahmed, an Assistant Professor in the Department of Computer Science at the University of Toronto. Ishtiaque has done extensive research on social media accountability, biases in the AI algorithms used by different social media channels, data sovereignty, data democratization, and ethical practices in machine learning.

Ishtiaque talked about the elements that influence algorithmic decision making, keeping in mind data and input biases: who is doing it, why, and how? He thinks reviewers have their own limits too; they can’t do fact checking on the ground. On the other hand, there are cultural issues, which make verification difficult. Historically, human civilizations are built on many things that they didn’t want to verify, so what do we do with those? He thinks that if a piece of disinformation is highly localized, it needs to be addressed through localized mechanisms. To him, disinformation is not essentially about whether something is factually true or false, but about whether it is harmful to others. If it is harmless, then nobody cares.

Ishtiaque argues that Artificial Intelligence (AI) builds on modern-day history, in which the voices of the marginalized are absent. Platforms, on the other hand, are controlled by politically powerful and educated elites. Hence AI essentially works to amplify human biases. Yet millions of pieces of content are being generated every single minute, and it is not humanly possible to verify everything without the support of AI-driven machine learning processes. He explained that social media platforms have decentralized the media structure, but the ‘attention economy’ of these platforms draws like-minded users together and algorithmically amplifies the content they want to see. A way forward could be data sovereignty or universal data rights, as there is currently no structure comparable to the Universal Declaration of Human Rights to ensure everyone’s equal participation.

While talking about awareness and capacity building, Ishtiaque referred to Amartya Sen’s ‘Capability Approach‘, which holds that not having access to resources affects one’s capability to enjoy rights and development. One side is therefore instrumental: technical knowledge of what to do and how to do it; the other side is creating a society where no one is marginalized and everyone enjoys equal rights and opportunities. Within the current framework of social media, the burden of learning always falls on the people at the bottom end, and the sheer amount they have to learn is itself a barrier. Therefore, social media should be localized and its approach to content should be pluriversal, i.e. instead of choosing one perspective, accommodate all of them. Unless a perspective is harming anyone, there is no need to cancel it out. Please listen to the conversation below.
