Latin America and the Caribbean are grappling with the rapid spread of disinformation, which undermines democratic processes and social stability. Strengthening regional cooperation through initiatives like eLAC2026 and leveraging AI responsibly can counter this threat – provided that ethical guidelines, robust partnerships and inclusive media literacy programmes are in place.
Latin America and the Caribbean (LAC) are especially vulnerable to disinformation: the region’s longstanding social and economic inequalities and educational gaps combine with political polarisation and low trust in institutions to fuel the spread of false information. By integrating artificial intelligence solutions into existing development frameworks, such as the region’s new AI Action Plan, eLAC2026, LAC can develop shared standards and the technological capacity to mitigate the damage of misleading content. Equally important is ensuring that these AI tools uphold ethical principles and protect democratic values. According to an OECD survey, adults in Brazil and Colombia scored below average among all countries surveyed in identifying false information. In most countries, lower income levels were also consistently associated with a reduced ability to identify misinformation and disinformation. Deep political divides exacerbate matters: during Brazil’s 2018 elections, for instance, millions of WhatsApp users encountered falsified news stories and misleading political content on a daily basis. Low trust in formal institutions, often correlated with corruption, further pushes people towards unofficial sources, amplifying the damage false narratives can inflict on governance.
Regional initiatives incorporating AI are the key
Regional bodies like the Community of Latin American and Caribbean States (CELAC) and the Economic Commission for Latin America and the Caribbean (ECLAC) have driven progress on digital governance for over a decade. Their latest Action Plan, eLAC2026, puts AI at the centre of regional development. The plan includes use of the Latin American Artificial Intelligence Index (ILIA), a platform that tracks AI investments, policies and laws across the region, backed by development banks, UNESCO, and private-sector giants like Google.
As part of its next phase, ILIA could include indicators for disinformation countermeasures. Similar to how the UN’s CLIMB Database tracks policies on climate-driven human mobility, ILIA could catalogue legislation and initiatives dedicated to combating AI-driven misinformation. This “one-stop shop” for AI policy would foster cooperation across ministries and civil society, ensuring that guidelines for responsible AI are accessible and easy to implement.
Adopting AI to fight disinformation demands strong ethical frameworks that safeguard human rights. If language models and other AI systems are trained on biased data, or if they lack adequate oversight, they may inadvertently entrench social prejudices. And without transparency, there is a real risk of disproportionate surveillance of citizens. For this reason, best practices such as the EU AI Act’s transparency requirements, under which AI-generated content must be clearly labelled, can serve as a starting point. As outlined in Article 50, for example, providers and deployers of such systems are required to disclose the artificial origin of image, audio, video, or text content generated by AI. These regulations are seen as the EU’s endorsement of responsible AI, signalling a shared commitment to balancing innovation with ethical considerations.
Contrary to popular belief, AI development in LAC is not limited to large technology multinationals. In fact, most AI activities emerge from smaller companies and start-ups. By working under the CELAC and ECLAC umbrella, alongside entities like the Development Bank of Latin America, these young enterprises can secure crucial funding, expertise, and opportunities to collaborate regionally. This approach not only unlocks local talent but also promotes AI-driven solutions tailored to specific national realities. Strengthening national AI entrepreneurship in line with eLAC2026 helps address persistent issues, from disinformation to social inequality.
Collaborative and cross-border strategies offer a blueprint
AI alone cannot solve the challenge of disinformation; building a digitally literate citizenry is key. Taiwan’s “humour over rumour” campaigns, supported by the Taiwan FactCheck Center, show how blending government, civil society and creative volunteers can combat viral hoaxes with entertaining, factual counter-content. Although LAC has different social and cultural dynamics, adopting such a collaborative model could work. Media literacy should be integrated into public school curricula, community workshops, and online resources, ensuring that citizens across age groups can spot manipulated content. Programmes like Mexico’s Verificado show how partnerships with technology platforms (e.g. the Facebook Journalism Project) enable community-driven fact-checking at scale.
Latin America’s diversity in language, culture and political systems calls for cross-border strategies. The European Digital Media Observatory (EDMO) offers a blueprint for building an international fact-checking network powered by AI. This approach links various national teams under standardised protocols, centralising resources such as AI-driven content analysis tools. Fact-checking organisations like Spain’s Maldita.es have leveraged AI to detect fraudulent WhatsApp messages, while the African Fact-Checking Alliance (AFCA) coordinates efforts across linguistically diverse nations. Adapting these proven, open-source systems for Latin American contexts would enhance readiness for election cycles or crises where disinformation spikes.
Initiatives like eLAC2026 and ILIA drive the development of a cohesive, AI-empowered response to disinformation in Latin America and the Caribbean. At the policy level, introducing clear metrics for disinformation reduction under ILIA would incentivise national governments to adopt robust standards and share their successes regionally. Practical measures – media literacy training, investments in local AI start-ups, and cross-border verification networks – can further strengthen democratic norms.
None of this will succeed, however, without strong ethical safeguards. Ensuring AI systems respect privacy, incorporate oversight, and protect vulnerable communities must remain at the forefront. By uniting around these shared principles, LAC can harness AI to bolster democracy and civic engagement, establishing a global example of how technology, when used responsibly, can fight falsehoods and foster trust.
Photo by Leon Overweel on Unsplash