Expert group discussed regulatory approaches to monitoring online content on 9 April.
Joanna Bryson offered her insights on AI and machine-learning technologies as part of an expert group assembled to inform the development of policy guidance on artificial intelligence (AI) and freedom of speech, in a workshop held for the Organization for Security and Co-operation in Europe (OSCE) on 9 April.
Online platforms such as social media use automated technologies, including AI and algorithms, to moderate and curate content – helping decide which content is removed and to whom it is distributed. The question for policymakers is how these activities affect fundamental freedoms such as freedom of speech.
Joining experts from around the world, Bryson contributed to the workshop on regulatory approaches to monitoring illegal content, such as terrorist or pornographic materials, that could pose serious security and societal concerns. It was organized by the OSCE's Representative on Freedom of the Media (RFOM) and its partner, Access Now, and is part of a wider ongoing project considering the impact of artificial intelligence on freedom of expression (SAIFE).
"It’s an amazing part of the COVID era to see how rapidly we can make progress on these kinds of efforts, and with such diverse voices," said Bryson. "Communicating both the limits and the potential of technology in domains such as fairness, inclusion, and accountability is an incredibly important role for academics."