I’ve noticed some dominant narratives in public discourse around doing-what’s-right-with-AI. I’ll give them some oversimplified names, since this is a complex topic.
- Ethics: getting organizations to deploy AI in ways that avoid harm to individual people. Example: avoiding discrimination in lending/policing/sentencing. Fairness, accountability, and transparency broadly fall under this heading, though those considerations have broader impact too (like making trustworthy systems). This is most common in academic settings, such as the FAccT conference. Solutions are often sociotechnical, e.g., get a broader range of stakeholders involved.
- Safety: avoiding risks to society and humanity. OpenAI talks a lot about this, sometimes using the term “alignment”. Example: preventing disinformation campaigns, keeping language models from generating racist comments. Solutions are often technical, e.g., tweaking model behavior based on human feedback.
- Data/AI For Good: developing technology that addresses Problems That Matter, which often means serving those who are hurt, vulnerable, or oppressed. Example: AI for medicine, climate change mitigation, ecological restoration, agriculture, etc.
- Wise engagement: making individual choices about how to engage with AI systems.
I’d like to suggest a broader perspective that includes all of the above. It’s the biblical word shalom, sometimes translated as peace, wholeness, or flourishing. It involves the absence of conflict and harm, as the translation “peace” suggests, but goes beyond that to a comprehensive vision of things being right. Shalom includes right relationships, including justice. It also implies a wide-eyed realism about the fallen state of the world, both of human hearts (our inclination to elevate self and harm others) and of our technology. So the vision of shalom is unlike either techno-optimism or techno-pessimism.
The Bible doesn’t define shalom very explicitly; instead it usually gives examples. That invites us to consider examples of what shalom might look like as we develop a society that includes computational intelligence-augmentation technologies, while also expecting that people will come to different conclusions about the specifics. Here are a few things that I’ve thought shalom might mean for us. Note that this includes both threats and opportunities.
- Healthy view of self
  - ML systems nudge us to view ourselves in terms of only our skills, and then to devalue those skills.
  - We are not machines. We have potential and responsibilities both to create and to empathize.
  - We have a new identity: “You were ransomed from your futile ways…by Jesus”; “offer ourselves as instruments of righteousness”
  - Tools are framed in terms of efficiency, leading us to an “efficiency”/“productivity” mindset: we view ourselves in terms of what we can produce.
    - instead: avoid maximizing engagement; let people use words to describe what they want to exist or to become.
- Healthy relationships with other people. Honor, love, serve.
  - Embrace of diversity of thought and expression
    - We believe that a diversity of people, cultures, expressions, views, etc. reflects God’s glory less incompletely than any individual.
    - Threat: LMs embodying narrow norms of language, encouraging a smooth sameness
  - Humans communicating truth to other humans.
    - We believe that there is truth grounded in reality.
    - So we need to counter threats:
      - disinformation and propaganda, which will be easier to produce at scale.
      - sophisticated scams, which will be easier to carry out, leaving our neighbors vulnerable.
      - Don’t spit generated words in other people’s faces.
      - gratitude towards other humans (see gratitude post)
      - truth of outcomes (e.g., not authoritatively spouting falsehoods)
      - caring for those who work with what we make.
    - Opportunities:
      - better writing, as systems help people express themselves more clearly
      - better reading, as systems help people process more perspectives more deeply.
    - LM opportunities:
      - Democratized access to conventional wisdom
      - Helping us reflect on how much of what we expect of people is actually mimicry, and how much of it should be
- Healthy relationships with technical systems
  - People, not systems, in control.
  - No hidden nudges pushing people to express certain viewpoints
- Healthy relationships with the created world
  - beauty: celebrating, not cheapening
  - caring for the natural world around us
  - conscious of energy use (ML requires a lot)