Editor's Review

By Victor Bwire 

The blurred line between traditional journalistic practice and emerging trends, including independent content production and user-generated content, is now part and parcel of the modern media landscape. This shift is at the heart of Kenya’s new code of media practice. The code seeks to balance traditional ethical standards with the content moderation rules commonly known as “community standards” on digital platforms, expanding regulation to cover all players involved in the professional collection and dissemination of news.

Media regulators have been compelled to abandon a policing approach to media oversight in favor of broader, stakeholder-driven codes. These new structures aim to promote responsible media content production and acknowledge the increasing use of technologies such as artificial intelligence (AI). This marks a shift from the earlier, narrower focus that primarily targeted accredited journalists.

The Media Council of Kenya's new code of ethics for media practice was developed with these evolving dynamics in mind. It embraces a participatory approach, incorporating public input in its formulation, and comprehensively addresses the complex realities of today's media environment.

For years, stakeholders have raised concerns not only about the quality of media content but also about declining professionalism. The traditional gatekeeping role of professional journalists and editors is becoming harder to maintain, particularly in an era in which AI generates stories and audience-led programming is built on user-generated content.

Traditional codes were limited in their ability to regulate emerging media actors. Meanwhile, freedom of expression advocates have consistently cautioned against introducing any code provisions that could restrict freedom of speech or media freedom. Yet one constant remains: in the digital era, safeguarding the truth demands a new regulatory approach.

There is a growing need for a converged regulatory framework that aligns with the reality of an expanded space for freedom of expression and access to information. Still, some issues remain outside the scope of global standards, particularly the regulation of content via platform-generated community rules. The most pressing of these is the threat of harmful content, including hate speech, disinformation, and misinformation, all increasingly enabled by AI and user-generated content.

Importantly, regulating content is not about curbing innovation and technology. Rather, the goal is to ensure that the information shared complies with both international and national laws, minimizing legal conflicts. This should not be seen as an attempt to restrict freedom of expression. In fact, in Kenya, as in many other countries, freedom of expression is subject to general limitations. (See Article 33(2) of the Constitution on the boundaries of this freedom.)

In recognition of the growing importance of professional obligations, many Kenyan media organizations have developed social media and blogging policies for their journalists. These policies aim to guide media workers in balancing their private and public roles online, ensuring that professionalism is maintained even in personal spaces.

The new Media Council code includes a dedicated section on the Standards for Use of Artificial Intelligence (AI) and User-Generated Content (UGC) Moderation. This section provides ethical guidelines for the responsible use of AI in news gathering and content creation, calling for transparency and safeguards against bias. It also outlines standards for moderating user-generated content, making media outlets accountable for the content they host and helping to curb the spread of illegal or harmful material.

Key provisions in this section include:

  • Ensuring the use of AI and other technologies is fair, balanced, and does not perpetuate harmful stereotypes, compromise accuracy, or infringe on intellectual property rights.
  • Disclosing to audiences when AI has been used in content creation.
  • Ensuring AI-generated content is subject to human editorial oversight and sign-off.
  • Establishing safeguards in automated decision-making to prevent bias.

Additionally, the code states that:

  • Journalists and media practitioners are ethically responsible for third-party content published on their platforms.
  • Media outlets must clearly distinguish between editorial content and user-generated forums.
  • They must monitor their platforms, including social media, and take measures to prevent or remove content that is unlawful, violates human dignity or privacy, or promotes hate speech.
  • They should adopt and publicize third-party content policies, including user rules for contributing content, and employ tools to detect and disable harmful material.
  • Outlets should also support media and digital literacy initiatives, helping the public understand that user-generated content does not necessarily reflect the media organization's views.

In high-risk reporting scenarios, such as crimes in progress, hostage situations, or hijackings, the code requires media to take special care not to endanger lives or obstruct law enforcement efforts. It also addresses the need for delayed broadcasts in situations where live reporting might breach ethical standards. 

Mr Victor Bwire is the Head of Media Development and Strategy at the Media Council of Kenya.