Report: AI Threats to the Art World


Twenty months into the global focus on artificial intelligence (AI) and its increasing presence in daily life, the nature and scale of the AI challenge to artists and art institutions, and to academics seeking transparent access to information, have become clearer following the publication of the seventh annual edition of Stanford University’s Artificial Intelligence Index Report.

The report, the most comprehensive research study of the state of AI, reveals an industry increasingly dominated, and monetised, by the tech giants Google, Microsoft, OpenAI and Meta, one in which the cost of advanced breakthroughs in AI, once made by academic institutions, has become prohibitively expensive. Meanwhile, it finds that new closed, proprietary AI models, whose lack of transparency is a source of increasing concern even to the tech giants themselves, have been outperforming the capabilities of open-source models, which are fully available to academics, artists and developers.

Existential challenges

Both art institutions and artists face existential challenges in negotiating this AI universe and its emerging financial model. Art-world experts consulted by The Art Newspaper have offered a mix of pragmatic, creative, combative and hopeful responses to the report and to the state of the AI industry.

The museum sector is at a technological tipping point and will soon have to engage with industry giants such as Google to disseminate information and data, the museum director Thomas Campbell told The Art Newspaper in Hong Kong earlier this year. “It’s just a matter of months before these systems are going to be telling you about Monet, Medieval tapestries or Damien Hirst,” he said. “They’re going to be doing that whether we are participating or not.”

The challenges of cost, accessibility and intellectual-property protection that have emerged in the age of AI, and the financial pressures documented in the Stanford report, are far from unique to the art world. But for museums, some of the biggest questions are those of distance and control: how they can harness the power of AI to classify and offer new insights into their collections, as a partner or licensee of tech giants, without finding themselves pushed further from the audience looking to access and understand their art and activities, distanced by the summarising power of AI chatbots. (News publications face a similar “distancing” challenge this month, as Google rolls out, first in the US, an AI-generated summary in response to search queries, in place of the longstanding interface that ranked stories while displaying links back to their sources.)

The Future Art Ecosystems (FAE) team at Serpentine, in London, which has led research for the past decade on how cultural institutions work with AI, takes a less binary view. “Lines of power distribution are still being drawn”, the team tells The Art Newspaper, because of legal challenges (largely over copyright) to generative AI models that depend on scraping vast amounts of image and text data from the internet, and because of growing public awareness of the “cultural, regulatory and ownership interests” attached to the functioning of leading AI models such as GPT-4 and Stable Diffusion.

The FAE team, which published Future Art Ecosystems 4: Art x Public AI (FAE 4), its fourth annual report aimed at encouraging new thinking and collaboration around the interaction between art and technology, says: “The mandate of cultural institutions is to make informed decisions that serve the public interest. This does not mean there should be an absolute embargo on partnering with large corporate actors, but the terms of that partnership should benefit the public”, above and beyond the question of access to advanced AI.

How to work with big tech

At the heart of the 2024 report is an emerging dilemma that every artist and art institution seeking to engage with AI will have to face: will that engagement be through AI created by the world’s giant technology companies, or will they develop their own models, or work with open-source models freely available on the web?

Access to, and control of, the technologies of production is a critical part of artistic, democratic and institutional freedom. AI’s complexity and cost mean that disparities between proprietary “closed” technologies and the open sharing and re-use of technology, data and ideas will likely increase—as may the impact of those disparities on democratic and creative freedoms.

According to the Stanford report, the most dramatic breakthroughs in AI in the past year have come from the closed, proprietary approach. As the report’s editor-in-chief Nestor Maslej said: “If it’s the case that closed developers are substantially outpacing or substantially outperforming developers that are open, this could have a lot of implications for how democratic and how widely distributed the benefits of the AI revolution could possibly be.”

The Artificial Intelligence Index Report shows how rapidly the dynamics of AI development are changing. In the 18 months since ChatGPT was released, AI has surpassed human capability in entire task categories, including some forms of image classification, visual reasoning and English-language understanding. Crucially, this capability is starting to deliver decisive outcomes in science, where new AI applications such as GNoME, which accelerates the discovery of new materials, have emerged in the past year.

Driving these advances is corporate investment in proprietary models, and the cost of those advances is growing. Industry now clearly leads advanced, or “frontier”, AI research, the cost of which is moving beyond the capacity of states or academic institutions. Training a “frontier” AI model, one that may do or discover something new, already costs more than $100m, and that figure will rise. These costs are driving ever-larger investment rounds, with more than $25.2bn of private investment in the past year.

However, the report’s authors point out that university researchers, sidelined financially from the recent AI breakthroughs dominated by the tech giants, may regain their place in the vanguard through research advances, not least in how efficiently models use data: the kind of breakthrough that might change the stakes in the innovation “space race”. The report also highlights that 2023 saw the launch of 21 notable AI models through industry-academic collaborations, “a new high”.

The main social outcome so far is anxiety. The world is noticing the incursion of AI into everyday life and growing increasingly nervous about it: recent data from the Pew Research Center shows that 52% of people are more concerned than excited about AI, up from 37% two years earlier. If the gap continues to grow between proprietary technologies and those made directly by, or openly and fully available to, artists and institutions, what might that mean for what we understand an AI artist to be? And how can art institutions maintain the trust and data integrity they depend on to fulfil their public roles?

The emerging dilemma over the power and ownership of AI means we may already have passed a threshold: what it meant to be an “AI artist” for the past 50 years is obsolete, and what it will mean next has not yet emerged.

Harold Cohen is commonly accepted as the first AI artist, and others have since followed his version of what an “AI artist” is. In the late 1960s Cohen created AARON, the software that painted his paintings. Cohen was both the creator of the technology and a collaborator with it: he made his AI model in his own artistic image, with AARON painting in the style Cohen had worked in successfully for some years. The two became inseparable.

This conjoined role, in which humans make the AI and then act as creative collaborators with the technology they made, has been the basis of what we have meant by an “AI artist” for the past 50 years. It is the model underlying the past decade’s generation of artists, whose use of AI has brought them, and the wider medium, to prominence.

The notable AI artists of the past decade (Refik Anadol, Mario Klingemann and others) have worked primarily with a type of AI called generative adversarial networks (GANs). GANs are technologically complex, though within reach of technically skilled, independent artists, but they are not as complex or capable as the new generation of Generative AI unleashed since 2022.
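The adversarial mechanism is worth making concrete. A GAN pits two neural networks against each other: a generator produces candidate samples from random noise, while a discriminator tries to tell those samples apart from real data, and each improves by trying to defeat the other. The sketch below is purely illustrative: a toy PyTorch example on invented two-dimensional data, not any artist’s actual pipeline.

```python
# Minimal GAN sketch (PyTorch). Toy data and architecture, invented for
# illustration; real GAN art pipelines train on large image sets.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-dim random noise to a 2-D "sample".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: maps a 2-D sample to a real-vs-fake logit.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Toy "real" data: points clustered around (2, 2).
    return torch.randn(n, 2) * 0.3 + 2.0

for step in range(2000):
    # Discriminator step: label real samples 1, generated samples 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G here
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make D label generated samples as real (1).
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated points should cluster near (2, 2).
print(G(torch.randn(5, 8)))
```

Scale this idea up from toy points to tens of thousands of images and days of GPU time, and you have the kind of system behind the past decade’s GAN art.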

Take Jake Elwes’s 2019 Zizi—Queering the Dataset, which brilliantly skewered the biases in facial recognition systems by injecting 1,000 images of drag and gender-fluid faces into a 70,000-strong image set used to “train” an AI model, so as to reimagine what “normal” looks like.
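The mechanics of such a dataset intervention are, in sketch form, simple; the art lies in the curation. A hypothetical illustration (the folder names and file handling are invented, not Elwes’s actual pipeline): merge a small, curated image set into the larger training set before a generative model is trained or fine-tuned on it.

```python
# Hypothetical sketch of a dataset intervention: fold a small curated
# image set into a larger training set before (re)training a generative
# model. Folder names and counts are invented for illustration.
from pathlib import Path
import shutil

BASE_DATASET = Path("data/faces_70k")    # large "mainstream" face set
INJECTED_SET = Path("data/curated_1k")   # small curated set to inject
COMBINED = Path("data/faces_combined")   # training folder for the model

COMBINED.mkdir(parents=True, exist_ok=True)

# Copy both sets into a single training folder, renaming to avoid clashes.
for i, src in enumerate(sorted(BASE_DATASET.glob("*.jpg"))):
    shutil.copy(src, COMBINED / f"base_{i:06d}.jpg")
for i, src in enumerate(sorted(INJECTED_SET.glob("*.jpg"))):
    shutil.copy(src, COMBINED / f"injected_{i:06d}.jpg")

# A generative model trained or fine-tuned on COMBINED now treats the
# injected faces as part of "normal": the intervention is in the data.
```

The point is that the intervention happens at the level of the training data; the model’s own code is untouched.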

Anadol, like many other notable AI artists, spent time experimenting with AI at Google, with a residency at Artists and Machine Intelligence (AMI) in 2016, but his approach has ultimately been the same as Cohen’s: to build his own technologies and then collaborate with them to produce work.

But Generative AI has shifted the dynamics of AI’s technical complexity and its potential for creativity, radically and at speed. GANs work from relatively small bodies of data to complete very specific tasks. Generative AI builds from millions, hundreds of millions, even billions of pieces of data, with outputs in multiple formats, and its possibilities are only beginning to be explored. It has evolved at such speed in the past 20 months that previous generations of AI have been left completely behind.

This year Anadol has become the first significant figure to move his production, while maintaining his role as artist and technology creator, into this much more complex, more expensive domain. His recent project for the World Economic Forum in Davos, later shown at the Serpentine in London, is based on millions of images, sounds and texts inspired by data on flora, fungi and fauna from more than 16 rainforest locations around the world.

Anadol aims to use his Large Nature Model AI to power Dataland, a “museum and Web3 platform dedicated to data visualisation and AI arts”, which he plans to launch in 2025. The model, he says, will be made available for re-use as open source for educational and research purposes and is “trained on the most extensive, ethically collected dataset of the natural world”.

This is a grand aspiration: the artist not just as creator, but as provider of technology, data, tools, and virtual and physical space for others. To achieve it he has received cloud-computing backing from Google and AI research support from the chipmaker Nvidia. As he tells The Art Newspaper: “It’s very challenging research… you can’t do it without the support of a tech pioneer; you need those computing and AI research resources.”

Do artists need the most advanced AI?

Serpentine has been addressing these problems. In the fourth volume of its Future Art Ecosystems report, the team designs a model for how art institutions can create, manage and license data that might be used as training material for, and accessed by, AI. Using the example of a forthcoming exhibition with Holly Herndon and Mat Dryhurst, it maps out how the critical creative data used in an exhibition can be placed in a “data trust”, externally governed by a data trustee who manages and oversees its usage; one possible shape for such an arrangement is sketched below.
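To make the idea concrete, the terms a data trustee oversees could be expressed as a machine-readable record. The sketch below is speculative; the schema and every value in it are invented for illustration and are not prescribed by FAE 4.

```python
# Speculative sketch of a machine-readable "data trust" entry. Every
# field name and value here is invented for illustration; FAE 4 does
# not prescribe a schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class DataTrustEntry:
    dataset_id: str             # identifier for the creative data
    beneficiaries: list[str]    # whose interests the trustee protects
    permitted_uses: list[str]   # e.g. exhibition display, research
    prohibited_uses: list[str]  # e.g. commercial model training
    trustee: str                # the external governor of the data
    review_date: date           # when the terms are next revisited

entry = DataTrustEntry(
    dataset_id="exhibition-creative-data",
    beneficiaries=["the artists", "the commissioning institution"],
    permitted_uses=["exhibition display", "non-commercial research"],
    prohibited_uses=["training of commercial generative models"],
    trustee="independent data trustee",
    review_date=date(2025, 1, 1),
)

# A licensing decision then becomes a lookup against the entry's terms.
print("training of commercial generative models" in entry.prohibited_uses)
```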

They wonder whether the most advanced AI is something that should preoccupy the art world. “It is unclear whether [AI] models that are advanced by industry standards are necessarily fit for purpose for cultural organisations or artists, which might want to encourage a narrower and more experimental use of AI systems,” the group says.

However advanced the technical level at which Anadol works, the challenge for visibility will be how his offering holds up against the tidal wave of creative content already being unleashed by the proprietary image-, video- and sound-making tools of the first generation of Generative AI. Anadol’s model, for all its likely virtues, will not have the scale of the models being built by Google, OpenAI, Microsoft and others.

An imbalance of access

If artists have only limited access to AI models of the greatest scale and power, we may face the same significant imbalance of access to resources that has hindered the development of both cinema and games as distinctive art forms.

We will never know what the legendary film directors Ingmar Bergman or Yasujirō Ozu could have done with the budget of Ben-Hur and the power of the Hollywood studios behind them. And the leading games franchises have largely kept independent games-making artistry hidden in niches. So we may never know what the most talented artists might do if given the chance to build an advanced “frontier” AI to their own design, or if they were free to transform the models developed by big tech or big science, without the particular, and sometimes peculiar, ethical and content constraints those companies design in.

The dilemma for artists is existential: what is an artist in a world of AI? The dilemma for art institutions is one of distance and control. Distance, in this context, is driven by the chatbots and other interfaces that AI favours. As these become more central to the user experience, art institutions risk being displaced from the direct points of digital contact that the web and social media have given them. AI interfaces are points of interpretation rather than presentation: unlike search engines, they rarely present plain facts or even links back to sources. Large-scale data and information providers may increasingly find themselves at a distance from the audience looking to access and understand their art and activity.

That dilemma is even more acute around the crucial question of control: how AI accesses institutional data, and what data is used to train AI models. With court cases ongoing, led by The New York Times and others, to recoup licensing income lost when content was used to train some generative AI models, institutions must decide whether to let their collections data be used, probably for small sums if commercially at all, so that its insights form part of the material that AIs share with users. Yet if their collections and content are not part of the training sets behind tomorrow’s AI, they risk invisibility and anonymity.

At a national level in the UK, the Arts and Humanities Research Council is funding work to address the impact of AI on the cultural domain. Announced in January, the funding supports two multi-year programmes related to AI, a set of one-year projects focused on building networks and scoping future research on AI ethics, and a range of other projects. The lead programme, Enabling a Responsible AI Ecosystem, a partnership with the Ada Lovelace Institute, aims to connect policy and practice through collaborative research and to incentivise responsible, ethical innovation in the development and use of AI and data-driven technologies, in order to increase public understanding, trust and acceptance.

As the impact of Generative AI emerges, the art world is in complex, ambiguous territory. The democratising power of art, and the role of art institutions in sharing that art, are challenged as the tools needed to prosper in the digital age become more complex.

In this febrile atmosphere, research such as the Artificial Intelligence Index Report is designed to offer perspective and shape: a space in which to think. As Maslej, its editor-in-chief, said when discussing the report on the Data Exchange podcast, the AI industry moves in “sometimes weird and surprising ways”. “I think,” he said, “that the jury is still out.”


