Lab for AI, Metacracy & Plural Futures
However, many groups, including those defined by what Meta's policies term protected characteristics, such as Indigenous peoples, racialized communities, neurodivergent individuals, and migrants, hold radically different ways of conceiving and perceiving themselves, others, and the world. These worldviews are not merely underrepresented; they are structurally excluded. As a result, AI systems risk reproducing the very injustices they officially claim to protect marginalized individuals and groups from, only faster and at far greater scale.
This tension becomes particularly visible in contexts where AI systems are consulted for guidance on deeply situated domains such as family life, intimate relationships, spirituality, mental health, or communal responsibility—domains that are profoundly shaped by cultural, epistemic, and ontological assumptions. When users whose worldviews emerge from non-Western or marginalized paradigms engage with such systems, the outputs are generated through normative frameworks that may be culturally incommensurable with their lived realities. In such encounters, algorithmic responses can become not only misaligned or inadequate, but also subtly harmful through misrecognition, reduction, or epistemic erasure of difference.
What happens when guidance produced within a dominant epistemic framework is imposed upon forms of life it was never designed to recognize? Who is responsible then—the developer, the interface, the dataset, the theoretical foundations of AI, or the individual on the receiving end, left alone to navigate a technology that renders their worldview unintelligible?
Such questions are not merely theoretical—they shape the lives of real people. In this project, we confront these and similar questions.
We welcome collaborators from all walks of life (technologists, artists, activists, researchers), especially those from communities historically excluded from technological design.
Together, we seek to:
- Re-center AI development around justice, responsibility, and plural perspectives
- Build participatory frameworks for the theoretical foundations of AI governance
- Amplify the agency of marginalized voices in shaping digital futures