Different Song; Same Dance – Ethical AI Perspectives from Tanzania, Zanzibar and Africa

AEGIS Workshop at University of Dar Es Salaam

Tanzania, Zanzibar, Africa, and their myriad constituent cultures, have much to teach us about fostering safer, more inclusive technology. With a story that is both ancient and new, these regions are overdue their rightful participation in the emerging world of human-AI partnerships.

In May 2025, the AEGIS team co-hosted two workshops in Tanzania: one in Dar Es Salaam with the University of Dar Es Salaam, and the other with the Law School of Zanzibar. This was our final research visit before the conclusion of the project's funded work. Continuing the theme of the AEGIS project, participants drawn from academia, industry, policy and civil society discussed how the emergence of empathic AI partners should be considered in a regional context.

And how is that?

Preface: framing the cultural context

Well, once again, it’s complicated. Africa is many things. Tanzania alone boasts over 120 tribes, each with its own language. It is among the top 30 most ethnically diverse countries on Earth, alongside 24 other African nations. This inherent diversity must be kept at the forefront of any attempt to sketch a profile of a region like Tanzania, or of the African continent; each is many things.

Our goal from the outset has been to hear ethical perspectives (on empathy and human-AI partnering) from underrepresented regions, but of course this theme is immediately problematic. Who is more or less underrepresented? What do we mean by region? To what extent can one contiguous land area represent a united group of people anyway? Generalisation, abbreviation and omission are inevitable. Keeping this cautionary preface in mind, our engagement with Tanzanian participants revealed aspects that are unique to the region while also resonating with other regions we have visited.

Let’s try to illuminate some…

The shared cultural identity of Tanzania, and indeed of much of sub-Saharan Africa (insofar as any singular traits can be claimed), is typically centred on collectivism, highlighting principles such as:

  • Ubuntu (interconnectedness)

  • Communitarian emphasis

  • Primacy of the family unit

  • Duty

  • Respect

The first region we examined under the AEGIS project was Japan (following earlier research by our extended team), and it’s easy to see a striking similarity in the collectivist nature of that society, although these are clearly two very different places and peoples. Their placement on a human-development scale, for instance, sits at nearly opposite ends of the chart. Yet, as we discussed in our Japan whitepaper, each region’s collectivist aspects similarly contrast with those of the cultures within which the world’s leading foundational models (GPT, Gemini, etc.) were created (e.g. the USA, “the West”) – and thus deserve inclusion in technology planning.

Imagining communitarian AI 

The lived reality of Tanzanian culture was illustrated most vibrantly in an exercise led by Dr. Irene Isibika from Mzumbe University, in which she split us into teams to act out scenarios of imaginary AIs stumbling on local nuances. The picture presented was one in which families and the surrounding immediate community have a big voice in a person’s life, regarding their relationships, career choice and so on. The performances on our little stage starkly animated the cultural blinkers that western-trained language models wear. Misalignments (over issues such as child discipline, intimacy and same-sex relationships) can cause serious miscommunication between AI and users in African contexts.

Teams practising for their unexpected performances

This kind of participatory research & design can be as enlightening as it is fun (if a little scary!).

In a communitarian society, a person is likely to define their identity through their roles within key community groups – as an employee, as a brother – more so than in individualistic societies, and in turn to derive their duties from that context. As intelligent machine systems become ever more integrated into our lives, and develop ever more intimate relationships with us, how do such cultural norms play into the design of those systems, especially considering the western origins of the leading foundational models? Their normal mode of operation is a single-user interface, collecting data on that individual and gradually tailoring outputs to them. Can such interaction be adapted to a collectivist user base (e.g. where privacy and human dignity are interpreted differently), with group interaction rather than individual? How would that be optimised, measured, regulated? What would the user experience (UX) be like? Perhaps, where western models are designed for individual outcomes – whether engagement, utility or wellbeing – similar metrics could be translated into communitarian usage: the wellbeing of the surrounding social group; the level of uptake by the community.
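To make that last thought slightly more concrete, here is a minimal, purely illustrative sketch of what “communitarian” metrics might look like if the usual individual-level signals were aggregated over a surrounding group such as a household. Every name here (WellbeingSignal, community_wellbeing, community_uptake) and the 0–1 scoring are our own assumptions for illustration, not features of any existing model or product.

```python
# Hypothetical sketch: translating individual-level AI metrics into group-level ones.
# All names and scales below are illustrative assumptions, not an existing API.

from dataclasses import dataclass
from statistics import mean


@dataclass
class WellbeingSignal:
    user_id: str
    wellbeing_score: float  # assumed scale: 0.0 (poor) to 1.0 (good)
    active: bool            # has this person actually used the assistant?


def community_wellbeing(signals: list[WellbeingSignal]) -> float:
    """Average wellbeing across the whole group, not just the primary user."""
    return mean(s.wellbeing_score for s in signals) if signals else 0.0


def community_uptake(signals: list[WellbeingSignal]) -> float:
    """Share of the surrounding group that has adopted the tool at all."""
    return sum(s.active for s in signals) / len(signals) if signals else 0.0


if __name__ == "__main__":
    household = [
        WellbeingSignal("parent", 0.8, True),
        WellbeingSignal("sibling", 0.6, False),
        WellbeingSignal("grandparent", 0.7, False),
    ]
    print(f"group wellbeing: {community_wellbeing(household):.2f}")  # 0.70
    print(f"group uptake:    {community_uptake(household):.2f}")     # 0.33
```

The point of the toy example is only that the unit of optimisation shifts: the same signals a single-user system would attribute to one person are instead read across the social group, which immediately raises the measurement, consent and regulation questions posed above.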

Nurturing AI literacy and practice

At this time, AI usage, literacy and expertise are relatively limited in Tanzania. Counsel Kheri Mbiro from Breakthrough Attorneys explained that AI is mostly used in Tanzania for banking and finance, academia, health, agriculture, telecommunications, and legal practice. There is a major lack of technical expertise, and most ordinary people are far from understanding the technology well enough to have any agency in its adoption; this extends right up to government officials. AI literacy and AI readiness may not yet even be quantified in much of Africa (as implied by this: UNESCO supports Consultations for AI Readiness Assessment in Tanzania). Critical awareness could help arm ordinary people to mitigate some of AI’s dangers and to use it in more rewarding ways (although this is hardly a regional issue). To quote one of the Zanzibar workshop attendees, “Information literacy is supposed to be a human right. It’s high time, AI literacy should be a human right”.

Local data is sparse. Adding fuel to the existing fire of western- (and male-, white-, wealthy-, etc.) biased datasets on which many modern digital services are built, the quality, volume and accessibility of data on minority and indigenous groups throughout the world are meagre. Tanzania and its many cultural groups are no exception. On top of this, we heard how research and support for science are largely confined to public institutions (e.g. universities), with little additional private or individual research, and funding is relatively low even where it exists.

Regulation, policy, standards, laws – these too are thin on the ground in Tanzania. As we saw in Indonesia, the region has some history of adopting and adapting pioneering laws from elsewhere, which we could see happening with the likes of the EU AI Act. But caution was voiced about the Brussels Effect: participants preferred to look instead to sources such as the African Union (AU) and Islamic tradition, alongside discussions of digital sovereignty in Africa and wariness of Chinese investment and modern forms of colonisation.

Consider that mainland Tanzania’s religious mix is primarily Christian, with a significant Muslim minority, as well as many other (e.g. Animist) faiths, while Zanzibar is 99% Muslim. The island of Zanzibar deserves its own stories, but suffice to say that, despite its size, it has rich traditions and significant impact on Tanzania and beyond, and claims to be undergoing a digital transformation. And while we can suppose that Christianity has had some influence on the dominant western AI models, Islam could add various instructive ethical concepts, such as Maslaha (relating to the greater good / public interest). Of course the same goes for the ethical framings of other cultural, religious and social groups. They can all bring flavour and strength to the current, narrow framing.

Vibrant workshop at the Law School of Zanzibar

Localisation and adaptation

All the above issues point towards a need (and opportunity!) for localisation of the technology to better fit regional cultural characteristics. This challenge raises loud echoes of our recent interactions in Indonesia, as described in our post AI Localisation and Extreme Diversity: Insights from AEGIS Workshop, Jakarta, Jan 2025.

We began to ask what such localisation could look like and how it could be created, but this remains a rich area for further research and development (project, anyone?!). Some prominent insights from our discussions suggested that:

  • Each region or group should consider building a localised (e.g. African, Muslim) “layer” over existing (western) tools such as the foundational models, rather than attempting to build competitive equivalents. For instance: gathering indigenous datasets, maintaining ownership of them within the local community, and managing external access to that data (a rough sketch of what such a layer might look like follows this list).

  • Jobs could be created to overcome language and culture issues. We heard that the likes of GPT had trouble even translating into Swahili, which is a major language, while Africa likely has well over 1,000 spoken languages, and Tanzania alone over 100.
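As promised above, here is one rough, hypothetical sketch of the “localisation layer” idea: a thin wrapper that keeps community-curated context (norms, a glossary of easily mistranslated terms) under local control and only passes derived snippets to an externally built model. Every name here (LocalContextStore, LocalisedAssistant, call_foundation_model) is our own invention for illustration; no real provider API is assumed, and the foundation model is stubbed out.

```python
# Hypothetical sketch of a thin localisation layer over an existing foundation
# model. All classes and the stubbed model call are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class LocalContextStore:
    """Community-owned data that stays under local control; only derived
    context snippets are ever shared with the external model."""
    locale: str
    norms: list[str] = field(default_factory=list)           # locally curated cultural guidance
    glossary: dict[str, str] = field(default_factory=dict)   # terms the model tends to mistranslate

    def context_snippet(self) -> str:
        norms = "; ".join(self.norms)
        terms = ", ".join(f"{k} = {v}" for k, v in self.glossary.items())
        return f"Locale: {self.locale}. Norms: {norms}. Glossary: {terms}."


@dataclass
class LocalisedAssistant:
    store: LocalContextStore
    # The external model is injected as a plain callable, so the layer is not
    # tied to any particular vendor or API.
    foundation_model: Callable[[str], str]

    def ask(self, user_prompt: str) -> str:
        framed = f"{self.store.context_snippet()}\n\nUser: {user_prompt}"
        return self.foundation_model(framed)


if __name__ == "__main__":
    def call_foundation_model(prompt: str) -> str:
        # Placeholder only; a real deployment would call an external model here.
        return f"[stub model reply to: {prompt[:60]}...]"

    store = LocalContextStore(
        locale="Tanzania (Swahili)",
        norms=["family and community are consulted on major decisions"],
        glossary={"shikamoo": "respectful greeting to an elder"},
    )
    assistant = LocalisedAssistant(store, call_foundation_model)
    print(assistant.ask("How should I announce my career change?"))
```

The design choice the sketch is meant to highlight is ownership: the community keeps the raw datasets and curation behind its own interface, and decides what derived context, if any, crosses the boundary to the external model.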

Most of the Dar Es Salaam attendees, outside our venue – the Confucius Institute at University of Dar Es Salaam

Participation in the great global game

In conclusion, let’s try to paraphrase multiple participants from our two workshops:

Africa is not a spectator in the global race to define AI. Now is the moment to educate, empower, and build – ensuring that emerging technologies are developed and governed with African languages, values, and lived realities at the centre.
