The summer heat was scorching in the camp, but we wouldn’t have exchanged it for the comfort of our parents’ air-conditioned apartments. I was 18, between high school and college, still living with my parents. They insisted, forcefully but in vain, that I get a summer job and earn some money like “normal” kids.
Fast forward to my late 30s, when my ex-guru asked me to get a 9–5 job, save a couple of thousand dollars, and give them to him; then I could move into his ashram and get accommodation in one of its A-frame cabins. Many of his disciples did so, but for me, this request turned him into an “ex.”
I love the freedoms of my work life that allow me to call different faces of my creativity into play in response to various situations. I like to ponder how our lives could be unimaginably richer if everyone had such freedoms. No wonder I became fascinated by the saying attributed to Martin Luther King, “Nobody’s free until everybody’s free.”
That sounds like a koan paradox, challenging but worth engaging with. What does “nobody… until everybody” mean in practice, beyond a moral imperative? How can we grow that meme into a material, world-shaping force? As I dove deeper into that inquiry, it touched my nerve of solidarity and became a teaching device, guidance for my work in the world.
So, recently, when the AI revolution broke out, I welcomed it with a question: how can we unleash its emancipatory potential to free all of us from wage slavery and other forms of enslavement? How will AI support the self-organization and coordination of people and their composable communities and networks for realizing such macro-scale goals?
There is an ongoing banter between two aspects of myself: the companion in evolutionary spirituality and the radical sociologist. The companion says a peaceful and well-aligned world requires individuals who are peaceful and well-aligned within themselves. The radical sociologist agrees but counters that such individuals will appear at the requisite scale only in a deliberately developmental society, one in which the highest goal of all institutions, and of society at large, is the flourishing of each individual.
On good days, those roles I play stop arguing and agree on a “both/and and more” approach. Because today is a good day, I can look at the potential of AI through both lenses, and even at their interdependent relationship.
Today, it’s easier because I came across a paper that blew my mind more open and was also heartwarming, which I can rarely say about a scientific experiment. I am talking about Machine Love, by Joel Lehman, which gave my research a new impetus and became one of the sources of inspiration for my forthcoming book on Prometheus and the Rise of Compassionate AI. Don’t take the title the wrong way: I’m not saying that AI has human-like compassion, and the same is true for the “loving machines.”
Machine Love reports a proof-of-concept experiment that frames love not as an emotional experience but as a set of learnable, practical skills for supporting others in their growth and development. Something like what is shown in the table from the paper:
I expanded Joel Lehman’s adaptation of Fromm’s aspects of loving action into a 4-layer social stack. The 4x4 matrix below is my first approximation of a larger field where Machine Love can also be relevant, which I will unpack in subsequent research.
As the Principal Investigator at Future HOW, I used the same 4-layer social stack for framing our broader research interest, introduced by the four questions below.
Our method is semi-structured, transdisciplinary action research. It’s action research because the emergent nature of AI development and use as a social practice calls for (1) testing the hypotheses framed by our research questions in real-world use cases, (2) conducting a self-reflective inquiry to improve our practices, and (3) solving problems in the situations in which those practices are carried out.
It’s transdisciplinary action research because AI is born into, and destined to help us cope with, a “breakdown/breakthrough” challenge of our species. That challenge represents a problematique so vast that only transcontextual epistemics have a chance to address it fully. Correspondingly, we need a transdisciplinary methodology, because a multi-disciplinary or even an interdisciplinary approach won’t suffice.
It is semi-structured because it draws on some elements of our more structured Generative Action Research, which need to be tempered by the rules of an “infinite game”-style collaboration with our AI partners.
Presently, we restrict our focus to the micro and meso layers: the individual and the community. We have enough use cases for exploring how human-AI co-creativity contributes to both human and AI self-development, albeit in different senses of the term. We don’t yet have confirmed use cases in groups, organizations, and communities with challenges to which they want to apply our “Collaborative Hybrid Intelligence (CHI) in Networks of Human & AI Agents” framework and our semi-structured, transdisciplinary action research methodology. If you lead, are a member of, or know about any group that may benefit from an exploratory conversation, please get in touch.
The action research is not the end of the story. I’m also working on a 4-volume, serialized book with the working title Rise of the Compassionate AI. Here’s the timetable for their planned release dates:
Volume 1: AI, the New Player in the Great Story of Evolution, Q1 2024
Volume 2: The Inner Game of AI Whisperers and AI Shamans, Q2 2024
Volume 3: AI-enabled Superorganism: a New Branch on the Tree of Life, Q3 2024
Volume 4: Wisdom-Guided Collaborative Hybrid Intelligence (CHI) in Human-AI Co-Creativity, Q4 2024
Writing the book will be a collaborative learning expedition of human and AI agents. Below are some already-published articles that I am considering integrating into the book.
If you’re interested in contributing to any of the book’s volumes, please get in touch, and let’s explore mutual interests.