Artists concerned over AI as Ottawa talks regulations

When Vancouver-based artist Nicholas Kole first saw his own artwork copied by an AI image generator and posted by someone else, it devastated him.

“That really (broke) my heart, seeing that made my heart drop in a big way,” he told BNNBloomberg.ca in an interview.

Kole is a well-established digital artist who works with big-name clients like Disney, DreamWorks, Activision Blizzard and Nintendo, among others. He publicly shares his work on ArtStation.com, a platform where artists in the digital or gaming art industry host portfolios and connect with each other.

This summer, the website’s “trending” page was filled with the original work of millions of artists, from character designs to detailed landscapes that look like they came out of a fantasy novel.

Interwoven among these artworks, however, was the same repeated text-based graphic: “No to AI Generated Images” with a red prohibition symbol over the letters “AI.”

It was a protest by artists on ArtStation against the display of AI-generated images on the platform. Kole, whose art has been scraped – or taken – by AI and recreated elsewhere, started the movement, along with other artist colleagues.

He said the website is meant to feature original artwork by skilled professionals – and he wants to keep it that way. Seeing his art copied by AI generators has raised copyright concerns for some of his work clients, and he said it has been painful to see copies of his art appear online without his consent.

“The stuff that hurt me the most was seeing some of the personal work I’ve been doing over the last couple of decades – characters in a world that I invented, a story of my own – that has been imported wholesale into the dataset, that is now just part of this grist for the mill of mashing things up,” Kole said.

Kole’s AI protest on ArtStation was not the first, and it may not be the last.

New artificial intelligence technology has prompted excitement over its potential uses, but along with that have come concerns about how to regulate the new technology – particularly for visual artists, who fear their profession and lines of income have come under threat.

New image-generating AI tools like Midjourney, Dall-E and Stable Diffusion – some of them open access, meaning free and publicly available – have seen their popularity skyrocket, with many users discovering how easy it has become to create their own digital visual art. Stable Diffusion, for instance, was trained on the massive LAION-5B dataset of images scraped from the web.

Artists are worried about this ease of digital art creation, saying it raises concerns about copyright, fair use and their belief in what art is all about.

These artists’ concerns, along with other considerations about AI’s uses and potential abuses, will be top of mind this week as a parliamentary committee holds hearings to study a proposed law that would lay out regulations for AI in Canada.

HOW DO AI IMAGE GENERATORS WORK?

An AI image generator creates images using a model trained on an enormous collection of images and text gathered from all over the internet through a process called “scraping.” From Vincent Van Gogh’s “Starry Night” to a local artist’s online portfolio, AI generators ingest what’s visible online and learn to produce almost any type of image from just a text prompt.

By simply typing a text prompt into an AI image generator, anyone can create an image with a few clicks – but what the model spits out is an amalgamation of patterns learned from other artists’ original styles and artworks.
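To illustrate how little the prompting step demands, here is a minimal sketch that generates an image from a single text prompt using the openly released Stable Diffusion model through Hugging Face’s diffusers library. The model identifier and prompt are only examples, and the sketch assumes the library, PyTorch and a GPU are available; all of the “creative” work lives in model weights already trained on scraped images.

```python
# Minimal sketch of prompting an open-source image generator.
# Assumes the Hugging Face "diffusers" library, PyTorch and the
# publicly released Stable Diffusion weights are available.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image model; its weights were learned
# from billions of scraped image-text pairs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use a GPU if one is available

# A single text prompt is enough to produce a finished image.
prompt = "a whimsical dragon character in a painterly fantasy style"
image = pipe(prompt).images[0]
image.save("generated_dragon.png")
```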

WHY ARE ARTISTS CONCERNED?

These original artworks are not in the public domain, despite being on the internet, so the rise of image generators raises the question: is this use of AI image generation completely ethical? Is it okay to use an artist’s name or original art piece, put it through AI to doctor it up a bit, and then share it as your own artwork?

AI scraping has affected artists around the world who work in a variety of mediums, and some of them are pushing back.

Visual concept artists and illustrators have filed lawsuits against Midjourney and Stability AI for scraping their art and the art of others.

In the music world, a TikTok user used AI to create a new song that seemed to be sung by Canadian superstars Drake and The Weeknd; it was later taken down over copyright complaints.

And in Hollywood, actors and writers who went on strike this year, shutting down the industry, have protested not only poor wages but also the use of AI to insert computer-generated versions of their likenesses and writing into films.

REGULATIONS: WHAT CAN BE DONE?

So, what is being done about all these concerns, and can the law keep up with the intense speed of AI growth?

In Canada, proposed regulations for artificial intelligence technology are set to be discussed by a parliamentary committee in Ottawa starting on Tuesday.

The proposed regulations are contained in the Artificial Intelligence and Data Act (AIDA), which is part of Bill C-27, the Digital Charter Implementation Act.

This Act, which has not yet become law, is meant to act as step one in Canada’s effort to “guide AI innovation in a positive direction” and encourage responsible adoption of AI.

AIDA aims to ensure that “high-impact” AI systems – defined in the legislation mainly as systems that pose harmful risks to human rights, health and safety – meet Canadian safety and human rights expectations. It also prohibits “reckless and malicious” uses of AI and strives to ensure that AI technology and regulatory policies develop alongside each other at the same speed.

AI development, though, is moving faster than the legislation itself. At first glance, the legislation appears to cover only larger-scale, higher-impact AI systems, and may gloss over more specific issues such as people on the internet using open access AI software to copy the original artworks of artists, including Canadian ones.

Andrew Di Lullo, a Canadian lawyer and president of Listrom Di Lullo Professional Corporation, said the Act’s definitions of “artificial intelligence systems,” “high-impact systems,” “biased output” and “harm” all play a part in whether sites like Midjourney, Stable Diffusion and ChatGPT will be regulated by AIDA.

“(The details) are unclear at this time, hence why the Act is garnering so much public and expert scrutiny,” he told BNNBloomberg.ca in an email. That murkiness makes it unclear whether open-access, image-generating AI sites will be covered by Bill C-27. 

Di Lullo said those sites might be covered by Canada’s pending legislation, but noted that the bill defines a “high-impact system” only as “an artificial intelligence system that meets the criteria for a high-impact system that are established in regulations” – criteria that have yet to be written.

Di Lullo explained that the distinction the legislation draws between AI models and AI systems is mainly meant to ensure that the AI industry and researchers can train open-access models without worrying about whether their datasets will produce biased output, while still aiming to prevent the models from generating sexist, racist or otherwise discriminatory content.

It is unclear what reach the legislation would have to regulate companies, but Di Lullo said those details could be hammered out in regulations by Canada’s innovation, science and industry minister if the bill eventually becomes law.

“It is actually unclear if the AIDA will empower the minister to regulate Midjourney and there is reason to believe the minister could try,” he said.

WHAT ARE CANADA AND OTHER JURISDICTIONS DOING?

In Canada, Artificial Intelligence Governance and Safety (AIGS) is a non-profit organization dedicated to AI safety, with contributors from all over the world. Di Lullo is a member and legal professional within the group.

Wyatt Tessari L’Allié, spokesperson for AIGS, said society needs to plan ahead to stay on top of threats the rapidly evolving technology could pose to workers.  

“While generative AI is all the rage right now, it’s likely a passing phase. New systems able to create art without training on artists’ copyrighted material will eventually be possible, and that will make many of the current concerns moot,” Tessari L’Allié told BNNBloomberg.ca in an email.

“The bigger picture is that with AI getting better by the day, the list of things only a human being can do is rapidly shrinking, meaning that all human labour could become uneconomical in the not-too-distant future. As a society we need to think ahead, and make sure that a transition to a jobless world doesn’t trample on people’s rights or leave anyone behind.”

In Europe, the European Guild of Artists Against AI (EGAIR) represents artists who have had their original work scraped, copied and stolen by artificial intelligence and its users. The group works to protect creators’ intellectual property and advocates for the imposition of new laws to help regulate AI and prevent exploitation of artists’ work.

In an interview with BNNBloomberg.ca, artist Eva Toorenent, EGAIR’s advisor for the Netherlands, said she has seen a significant drop – up to 80 per cent – in artists’ sales due to the rise of image-generating AI.

As for the professional gaming art industry, Kole said he cannot point to an exact figure, but his industry colleagues have had bad experiences with AI in the art space.

Without regulation, he worries that companies could stop paying their artists and use only AI.

“It’s scary,” he said. “The temptation to experiment with (AI) is there, from the perspective of those people who see this primarily as a business or as numbers going up and down, and not as an art form or a career.”

In terms of solutions, Kole suggested an “opt in, opt out” method as an ideal way to regulate AI and prevent theft through scraping – giving artists a choice about whether their work is included in AI datasets.

This would be preferable to a model he takes issue with, which puts the onus on artists to ask companies to remove their original artwork when it is used in a dataset without the artist’s consent.

“I think regulation needs to begin at the data gathering stage,” he said. “Artists should be assumed not to want to be participating.”
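Kole’s opt-in idea could, in principle, be enforced at that data-gathering stage. The sketch below is purely hypothetical: it imagines a scraper that excludes a page’s images from a training dataset unless the page carries an explicit opt-in signal. The “ai-training” meta tag name is invented for illustration, not an existing standard, and the requests and BeautifulSoup libraries are assumed to be installed.

```python
# Hypothetical sketch of "opt in by default" at the scraping stage.
# The "ai-training" meta tag is an invented example, not a standard.
import requests
from bs4 import BeautifulSoup


def page_opts_in(url: str) -> bool:
    """Return True only if the page explicitly opts in to AI training."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("meta", attrs={"name": "ai-training"})
    # Default to "no": absence of the tag means the artist is excluded.
    return tag is not None and tag.get("content", "").lower() == "allow"


def collect_image_urls(url: str) -> list[str]:
    """Gather image URLs for a training dataset, but only with consent."""
    if not page_opts_in(url):
        return []  # artists are assumed not to want to participate
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return [img["src"] for img in soup.find_all("img") if img.get("src")]
```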

Another helpful tool cited by both Kole and Toorenent is Glaze, a piece of software created by computer scientists at the University of Chicago, which applies a nearly invisible layer of changes to an image, making it harder for AI models to read the real image and mimic the artist’s style.

Kole said the tool is helpful, but it doesn’t solve the issue at hand.

CREATIVE CONCERNS

On top of professional and copyright concerns, Kole made the case that using AI to generate art depersonalizes it.

“Those of us who have been drawn to this as a career have been wanting and yearning to create something beautiful, specific and expressive, and the people who are in it just to make a number go up on a balance sheet aren’t as motivated to do that,” he said. “You see that reflected in the magnetic draw of AI for them.”

In Europe, Toorenent and her colleagues at EGAIR are fighting the same fight – one that she and Kole agree shouldn’t exist.

“We don’t want to have to do this,” she said. “We just want to make art.”
