Artificial intelligence? It’s complicated. It’s the here and now of hyper-efficient algorithms, but it’s also the heady possibility of sentient systems. It might be history's greatest opportunity or its worst existential threat — or maybe it will only optimize what we’ve already got. Whatever it is and whatever it might become, the thing is moving too fast for any of us to sit still. AI demands that we rethink our methods, our business models, maybe even our cultures.
In September 2017, 20 designers, urbanists, researchers, writers, and futurists gathered at the Juvet nature retreat among the fjords and forests of Norway. We came together to consider AI from a humanist perspective and to step outside the engineering mindset that dominates the field. Could we sort out AI’s contradictions? Could we describe its trajectory? Could we come to any conclusions?
Across three intense days the group captured ideas, played games, drew diagrams, and snapped photos. In the end, we arrived at more questions than answers — and Big Questions at that. These are not topics we can or should address alone, so we share them here.
Together these questions ask how we can shape AI for a world we want to live in. If we don’t decide for ourselves what that world looks like, the technology will decide for us. The future should not be self-driving; let’s steer the course together.
What is success? How do we define and measure desired outcomes for AI and its projects? How do we pursue intangibles — human thriving and happiness — that resist measurement?
AI introduces existential opportunities and risks for major global and social issues like climate change, policing, and military conflict. How might we determine the appropriate role for AI in these areas?
Things will sometimes go wrong. AI introduces new risks in outcomes, transparency, ethics, manipulation, and even sentience. How might we promote a shared understanding of consequences, accountability, and failsafes when catastrophe strikes? What models of governance will be necessary?
Problems and solutions are not one-size-fits-all. How can we tailor AI to local or specific needs, not just global ones?
The little stuff matters, too. How can we use AI to improve people’s lives with small but meaningful interventions in everyday products and services?
New technology shapes dynamics of power and economics. AI could benefit a small elite, reducing the rest of us to mere cogs in its machinery. We prefer a future where AI benefits all and enables everyone to pursue self-determined lives. How might AI help to increase equality, not create more disparity?
The network enables “winner takes all” effects. How can we discourage undue and fragile concentrations of power? How can we make AI available to individuals, not just corporations and governments? What new business models or incentives would help ensure that?
Today’s AI development is dominated by global organizations and led by engineers. How might we equip and encourage them to make decisions in the interest of society — and not solely of corporations, governments, or technologists?
Diverse services deserve diverse co-creation. How might we develop AI in a community that is more diverse culturally, cognitively, professionally, and geographically? (We ask this mindful of the Juvet group’s own lack of diversity, our small number, and our shared background in the creative fields. These conversations have to be broader and more open.)
Bias begets bias. AI benefits from the perception of neutrality but, like any computer system, it is only as objective as its creators, data, algorithms, substrate, and interface. How might we understand and expose this bias? What steps should we take to compensate?
Machine logic is often a black box. Even the makers of AI systems can’t always explain the behavior of their creations. How do we design for this unpredictable medium? How do we audit the logic of systems that influence society’s fundamental functions? If we constrain AI to solutions we can understand, are we hamstringing its utility?
AI interfaces should be true to their underlying systems. How might we create “honest” interfaces that suggest both their capabilities and their limits? How can we make systems transparent about what they do for (and take from) the people who use them?
Human-like interfaces are the popular default for AI. When should humans be the model for AI? When shouldn’t they? Should these interfaces be playful, pleasurable, compassionate, humane, empathic? What are the psychological or moral implications of having human-like machines programmed for servitude?
Alternative UI metaphors deserve exploration. What are the alternatives to anthropomorphism? Is nature a useful model for the organic evolution of AI? Are there more effective metaphors that are unique to the machine?
The potential consciousness of AI is a polarizing philosophical concept. Can machines become conscious, and what ethical responsibility would that introduce for all of us?
Like every mass-scale technology before it, AI will change us. If this change is inevitable, how might we ensure humanity is changed for the better?
AI should amplify the best of humans and the best of technology. How might we enable human capability, not diminish or replace it?
AI will add layers to our perceptions of the world. How might we keep AI (and use AI to keep ourselves) rooted in human connection rather than simulation? How do we treat each other as people rather than suboptimal AIs?
AI will affect every sector of the labour market. The changes threaten to be very fast, faster than any prior analogue. What steps might protect the world against mass under-employment? How might we use AI itself to ease the transition of the workforce?
Work will not be the same after AI. What new jobs (or even sectors) will emerge? What existing jobs will change or be automated? How do we prepare the workforce to contend with these changes?
We still need the planet. All life relies on the ecosystem in which it evolved. The earth is a spacesuit without which we currently can’t survive. AI can’t simply please humans if that pleasure wrecks the world around us. How might AI help us keep the big picture in mind, to make informed local choices that add up to a sustainable whole?
Mainstream understanding of AI is haphazard, full of exaggerated enthusiasm and fear. How can we promote better literacy around AI — both for practitioners and the general public? What are the useful stories about AI that need to be told? AI hype has cooled in the past; why might this time be different?
We don’t yet know the shape of what’s to come. How might AI practitioners imagine and prepare for new paradigms, metaphors, and narratives?
Wild ideas are valuable right now. Data science and engineering have revealed the possible; now design and other fields can spin that potential into surprising, even outlandish, meaningful new forms. This is the time to think big, to play, to experiment. What new tools, relationships, and media might galvanize the community of practice?
AI is not the only answer. It’s tempting to see technology as a panacea for all of humanity’s ills. While we believe AI has the potential for much good, it’s not the only means to improve the human condition, and in some cases, may not be the best one.
The Juvet Group