By Victor Bwire
While big tech companies have previously been reluctant to engage with African audiences on basic ethical principles related to content on digital platforms and their impact, it is becoming clear that this region requires more attention.
Spaces such as X and TikTok, along with other digital platforms, have become vital for citizen access to information and engagement. We need more presence from big tech companies on the continent, not to control content but to enhance multiple approaches to regulating the information being exchanged.
Beyond the traditional concerns of harmful content and disinformation, the enactment of data protection, privacy, and copyright laws on the continent necessitates enhanced data literacy activities.
The Reject Finance Bill protests in Kenya on June 24, 2024, largely mobilized through digital platforms, demonstrated the depth of citizen engagement and the importance of addressing technical challenges, audience analytics, and the sharing of personal data on those same platforms. These issues underscored the urgent need for combined digital literacy efforts and self-regulation by the platforms.
The ease with which various media houses have combined traditional and new media channels for citizen mobilization and education during the Kenyan protests is noteworthy. Traditional media often picks up online clips, verifies them, and uses them in news or interviews. Conversely, content creators frequently use sections from traditional media news on digital platforms for amplification.
People now use Telegram to share private, high-quality images, and many broadcast media outlets use YouTube for live broadcasts across the nation and regionally. Media convergence and audience maximization are at their peak, blurring the lines between traditional and new media. Audience-led programming for content production is now a reality.
In the era of technology, truth is suffering the most. Existing regulations and practices seem unable to manage the situation. We need new approaches, especially in media and digital literacy, moving away from the fixation on developing new laws that threaten freedom of expression.
Freedom of expression is often the first casualty in conflicts and crises. Once disinformation reaches what is perceived as a global crisis, panic responses may include extreme laws and actions with adverse effects on freedom of expression. Already, countries in the global north have implemented laws on foreign information manipulation and interference through national security lenses, impacting freedom of expression.
Nations are now calling disinformation, or foreign information manipulation and interference (FIMI), a global crisis on the scale of climate change, radicalization, and financial crises. The current approach to regulating the digital ecosystem is inadequate, and merely citing community rules and removing disinformation or hate speech is no longer sufficient.
Delays by big tech companies in engaging meaningfully with this problem may result in countries enacting laws and administrative codes that hinder freedom of expression and access to information. It is worrying that without deliberate action by big tech companies, the design elements in digital information flow may lead to violations of freedom of expression, as countries frustrated by the evasive responses from platform owners develop cybercrime and disinformation laws.
It is clear that countries see disinformation as a significant threat to democracy, national development, global peace, elections, and health. Governments have made up their minds: if the big tech companies are not more proactive and resolute about platform regulation, government control will prevail. Can big tech companies stand up, be counted, and engage with other players, including academics, civil society, regulators, and governments, to develop a system of digital regulation that does not infringe on freedom of expression?
Many freedom of expression advocates have argued against laws like the social media tax, computer and cybercrime laws, and digital media laws, on the grounds that existing regulations are sufficient if media and digital literacy are adequately promoted within communities.
We do not need to criminalize platforms because of the people who misuse them but must ensure they are responsible for making their platforms safe and civil places.
Platform providers cannot remain aloof to the challenges that accompany their innovations; they must work robustly with others to ensure responsible use of their platforms. Information shared must be useful to the communities it reaches. Providers must invest in research, public education, and digital and media literacy for the good of the communities they serve, and they must join coalitions working to ensure the responsible, constructive use of information shared on their platforms.
Content regulation is not aimed at stifling innovation and technology. Rather, the emerging consensus is that information shared should comply with agreed international and national laws to minimise conflict with the law, and such regulation should not be treated as a pretext for restricting freedom of expression.
Copyright laws and awareness of AI developments will be critical to our communities as we move forward. It is not about the complexities of how AI works and the science behind it, but about understanding how it can be misused, for example by spreading misleading information during elections that undermines a country's democratic processes.
Mr Victor Bwire is the Head of Media Development and Strategy at the Media Council of Kenya