At its inception, the MUTEK Forum was held six months before the festival. In 2018, the two events were brought together, offering a unique perspective on digital creativity. Curated by Sarah Mackenzie and hosted by Claudine Hubert, the 9th edition, entitled “Courants d’avenir,” will be held all week long at Les 7 doigts de la main. MUTEK offers us the chance to delve into a wide range of contemporary themes: the relationship between culture, technology and the climate crisis; accessibility and inclusion within immersive technologies; the power of tech; art, governance and artificial intelligence; and the future of festivals. Here’s a report on the main conference of the second day, which focused on artificial intelligence.
Photo credits: Maryse Boyce
Conference
Shifting narratives of AI: confronting tech’s power
Sarah Myers West – AI Now Institute
“We are at a moment when critical work must not be reduced to worst-case scenarios, but can be firmly rooted in its origins, in the possibility of an alternative vision of a world where small-scale democracy is possible.”
Sarah Myers West’s words struck a chord. Her message is clear: artists and creative workers have an essential role to play in addressing the issues raised by artificial intelligence (AI) and in shaping the world we want to live in.
AI is a hot topic, and the term is becoming overused, as the researcher reminds us, beginning by questioning the name itself. “Artificial intelligence” is often used as a marketing tool: a “floating signifier” filled with ideas and visions, detached from material and, above all, technical reality. In other words, we attribute to AI powers it doesn’t necessarily have. A whole imaginary world has been built around it, largely nourished by the great works of science fiction.
Artificial intelligence is also a term sometimes used to refer to applied statistics and linear regression. Sarah Myers West then quotes the definition of AI given by American AI ethics researcher Meredith Whittaker: since this technology is fed by user data and used commercially, it can also be defined as a by-product of surveillance. In this respect, it’s important to point out that companies lack transparency about the provenance of the data they use to train AI models, disregarding issues of copyright and intellectual property.
Faced with the rise of AI and, above all, the desire of companies to develop these models on a large scale – which causes environmental and discriminatory problems and affects workers – Sarah Myers West reminds us that there are other possible trajectories.
Meaningful change requires tackling different forms of advantage:
- The data advantage: information asymmetry between companies and the public
- The computational advantage: dependence on infrastructure, hardware and software
- The geopolitical advantage: framed by regulation (or its absence), and by governments that support the development of AI as a strategic and economic asset
Going beyond the regulatory framework of public policies
Negotiations to regulate AI in the USA, Canada and the European Union are underway, but for the time being, safety is the priority, rather than the issue of algorithmic bias and discrimination. To date, we still lack information on the data used to train models like GPT-4, and Sarah Myers West reminds us that we can’t take companies at their word when they tell us they know what they’re doing. So far, they’ve proved they’re ready to bring their technologies to market even when those technologies aren’t ready.
Mechanisms need to be put in place to hold companies accountable for their actions. And the Frontier Model Forum, “a new industry body to promote the safe and responsible development of cutting-edge AI systems” launched by Anthropic, Google, Microsoft and OpenAI, isn’t enough.
How can we take action and make our voices heard? We need to confront the concentration of corporate power and get organized, says Sarah Myers West. Workers, creative workers and artists are at the heart of the resistance to these tech giants. Collectively, they are in a position to create leverage to ensure that AI is not used to devalue their work. The most recent WGA writers’ strike is an example of this struggle.
Not wanting to hear about AI is one thing, but the train has clearly left the station, and we’d better be ready to ride it so that we can act collectively.