Memo Akten, Distributed Consciousness, installation view, ACMI (photo by Phoebe Powell)

Tue 28 Nov 2023

AI & Art – Memo Akten, Rita Arrigo & Rebecca Giblin in conversation


ACMI


At a time when much of the discussion about Artificial Intelligence is negative or fearful, we hear from experts who are exploring its potential.

On 21 September 2023 at ACMI, Memo Akten, Rita Arrigo and Rebecca Giblin, and host Elizabeth Flux, discussed Akten's work Distributed Consciousness, and the intersection between art, technology, activism and culture. Learn how Artificial Intelligence is changing creative practices and redefining traditional concepts of the arts and STEM (science, technology, engineering and mathematics) disciplines.

Read the transcript.



Transcript

This transcript was machine-generated and published for search and accessibility purposes. It may contain errors.

ACMI acknowledges the Traditional Owners, the Wurundjeri and the Bunurong people of the Kulin Nation, on whose land we meet tonight. We pay our respects to Elders past and present and extend our respect to Aboriginal and Torres Strait Islander people from all nations of this land. So welcome to AI and Art: Exploring the Connections Between Creativity and Technology. I'm Liz Flux, the Arts Editor of The Age, and I'll be chairing the discussion today. It feels like AI is something we're hearing about increasingly, exponentially more. There's not going to be a lot of audience participation tonight, only two moments. This is one of them, and at the end there's the opportunity to ask questions; that's the other one. So who here has heard about AI in the last week or two, whether in the news or in conversation? Just a show of hands. All right, so you're in the right place. That's good. And of those people, who has heard about it in a positive light? Who's heard of it in a negative light? And who's heard of it in a neutral light? All right, so that's a cross-section. Interesting. So tonight we're going to get stuck into the truth of things, hopefully clear up some misconceptions and paint a clearer picture of what AI means to the art world, not just now but going forward. To do so, let me introduce our panel. Memo Akten is an artist creating speculative simulations and data dramatizations, exploring intricacies of human-machine entanglements, perception and states of consciousness, and the tensions between ecology, technology, science and spirituality, using AI to reflect on the human condition. Rebecca Giblin is co-author of Chokepoint Capitalism, director of the Intellectual Property Research Institute of Australia and a professor at Melbourne Law School, where she works on questions at the intersection of law and culture, particularly creators' rights and access to knowledge and culture. Rita Arrigo is a renowned digital strategist with a reputation for her ability to lead digital transformation projects in the public and private sectors, and a passion for AI and emerging technology. If you haven't seen it yet, Memo Akten's Distributed Consciousness is located in Gallery 1. I'm pointing that way because that's the door; I don't know the actual direction. It's inside The Story of the Moving Image exhibition and generously supported by Naomi Milgrom AC and the Naomi Milgrom Foundation. So to kick off our discussion, Memo is going to talk us through this work. Over to you. Thank you very much, and thank you all for coming. I'd like to quickly whiz through some of the themes behind the work. There are many layers to it, so I'll just summarise them briefly and we may or may not come back to them in more detail. On one hand, I made the work in 2021; that's when it really started. It was a response to the explosion of NFTs in 2021, if you remember, and the so-called Web3 movement centred around decentralisation and distributed computation, and all the ideologies that came along with Web3 and blockchains, et cetera. But it's also a response, perhaps more so, to the climate crisis, the general ecological devastation, mass extinctions, et cetera, that our civilisation and our way of life are responsible for, and our inability to mobilise, to take action.
And the work employs cephalopods, cephalopod cognition, their distributed nervous system and their distributed intelligence as a means of reflecting upon the increasingly pervasive synthetic alien intelligences that we're building, which we call AI, and especially the aspects to do with scraping the internet and our collective consciousness that exists on the internet. But it's also trying to draw attention to the distributed nature of intelligence and knowledge in general: all intelligence and knowledge is distributed, collaborative and collective, and the boundaries between individuals and species, and between living and nonliving systems, are more permeable and dynamic than we might think. So the work draws parallels between the distributed cognition of cephalopods and the distributed computation performed by smart-contract-based blockchains. And ultimately, as we face the challenges of the climate crisis and general ecological devastation, the work invites us to meditate on our relationship with all the living and nonliving beings we share our planet with. We are invited towards a decentering of human exceptionalism, invited to let go of the dangerous dichotomy of man versus nature, and I use gendered language deliberately here, a dichotomy that has been embedded in our culture for so many years, and instead to embrace the interconnectedness of all living, nonliving, human and nonhuman beings that we share the planet with. So those are some of the themes, and more, that I'm sure will come up in our chat now. Thank you. Wonderful, thank you. So the work as it exists now couldn't exist without AI, and technology has always shifted art. So is AI any different in the way it changes things from, say, the way photography changed things, or anything else that has changed things in the past? Whoever would like to go; you can all go if you like. I'll have a quick go. I do think that technology dramatically shifts art, and we can see that through the kinds of things that happened with the internet. That really changed the way we could find art, the way we could experience art, and people started to use it in very different ways. I think AI has a really significant difference to that, and we're seeing a lot of artists indulge in the capabilities of AI, particularly generative AI. You see that in your work: the significant use of generative AI in the imagery as well as in the language. I would say that we've always made art with the tools of the time, and we saw it really obviously with the advent of digital, when there was much, much more sampling and mashing up and so on. But one really interesting difference with AI, and particularly generative AI, compared to earlier technologies is this: these technologies are not going to become sentient, and they're not going to take half of all jobs. And so the hype men that are out there, and again I use the gendered language advisedly, who are telling us to worry about those threats, are doing so for a reason. They want us to look over there. But I think there are things we should actually be thinking about which are over here, and which are going to drastically change the conditions of many kinds of labour markets, including creative labour markets.
And I think one of the things that we really need to be focusing on, as Molly Crabapple has pointed out, is that if technologies like image generators seriously disrupt the market for illustrators to work and make a living, but they're only possible because of training data that comes from the work of pre-existing human creators, then that is a difference. And somebody said to me recently, and I found it so incredibly striking, oh, but you don't understand, artists are just like the buggy whip manufacturers; we don't need them anymore. The fact that somebody said that out loud stopped me dead in my tracks, and I realised I'm in such a bubble that a view like that could never have occurred to me. And one of the differences is that there is this risk, if there is a widespread perception that the kinds of outputs that can be generated by this are sufficient, that we devalue artists more than we already have. And I think that's something we really need to be thinking carefully about as well. Do we want to live in a world that doesn't have artists and doesn't have fresh art? Because I'm just wondering what sort of world that person is imagining. What sort of art do they want to see out there, or is it a world without art at all? I think it's a world that's very tightly controlled. Because if the machine is generating the thing rather than a human, that's something that they can understand and reduce to a binary of zeros and ones. But as soon as you add humans into the equation, it takes it beyond their power, perhaps. You might even assume that's a very uneducated opinion as well, potentially. Well, it depends on what kind of education you're talking about. This person was very highly educated in tech, for example, but you might say not as inculcated in the art world. They're not a stupid person; perhaps they just have a narrower focus. I also want to add that the word AI is very complicated and amorphous, but even more amorphous than AI, perhaps, is the word art. And one of the really fascinating things about this discussion is that people use the word art with very different meanings. I'm not making any value judgments here, but for example, someone who models a tree for a video game, their title is artist; they're a 3D artist. Someone who does, I don't know, a backdrop for a film set, they're called an artist. Someone who makes an illustration for a book is an artist. A Subway worker is a sandwich artist. Okay, that I was not aware of. Musicians are called artists, actors are artists. Okay, we're all artists, but the impact is definitely going to be different. So I generally don't say computers are going to make artists obsolete, because that's just too broad. But creative labourers, people who put labour into what we call the creative sectors, which is also a problematic term, I think, because doctors can be creative, lawyers can be creative, unfortunately or fortunately. Creativity has nothing to do with art, really; you can be an artist without being creative, and you can be very successful just applying the formula that works. But we call them creative sectors. So for people who work in those fields, yes, there is going to be a lot of automation, where maybe one person using software is going to be able to produce more output than ten people could before.
So, going back to the original question, one thing that's so fascinating, because there's a lot of anti-AI-art hate out there on the internet, is how much it mirrors the Luddite movement. In the early 1800s, when the Jacquard loom was invented, what it did was allow unskilled labourers to do the work of the skilled artisans more efficiently. And the introduction of this technology did not benefit the artisans; it benefited the factory owners. They effectively exploited the labour of the artisans, and then they managed to just dump them and switch to unskilled labourers. And now, with generative AI, you can produce images without requiring the skill of drawing or painting. On one hand, this is democratisation, which we can chat about separately, and I do think there's a lot of value in this; it will allow many people to tell stories they would not otherwise have been able to tell. But, for example, Stability, the company behind Stable Diffusion, built off the back of this huge pool of skilled artisans, is now worth, what were they worth, ten billion or something. What are those people worth? Nothing; they're potentially losing their jobs. So this is really the heart of the problem: where do the benefits and the value go, and who does it benefit? And I think it's really important to make those connections and to notice that the people who are funding these technologies, and trying to shape them, are not people who are setting out to make the world better. They're venture capitalists who are setting out to make ever more money for themselves and for their investors, and that's exactly what they're going to be doing. They're not going to strip away half of all jobs, but they are going to make work worse. There's more surveillance, a reduction of the creative control and discretion of humans, and a reduction of the skill you need, because if you start with an artificially generated image, you don't actually have to hire a skilled illustrator to make it; you just have to hire somebody with more basic skills to come in and do the edits. And that person is not able to ask for as good pay and as good conditions. But there are definitely organisations, corporations and businesses that are looking to responsible AI. A good example of that might be someone like Getty Images, who is about to launch a generative AI service that uses responsibly sourced, copyright-cleared images that have been paid for; you can then pay for these AI-generated images. But I would then say, look downstream at the conditions in which photographers licensing images to Getty Images work, and the chokepoints that these big corporations create. So even if they do licence the copyrights, and we saw this with Adobe, with their new generative AI product, Firefly, they said, we've licensed all of the training data. And technically they did, right? But that was because they got all of the contributors to their stock images to sign over their rights in the small print. The contributors didn't even realise what they were agreeing to, and then all of this was used. And when those photographers and illustrators complained about it, they were told, oh, we're going to maybe try and find a way that you get paid for it later, but you did technically agree to this. So there's ethically sourced and there's ethically sourced, I suppose, would be my response to that. So how do we protect creators?
How do we, because the problem often seems to be people behind the scenes making things worse, how do we protect people downstream? So there is some work going on. There's the Coalition for Content Provenance and Authenticity, C2PA, which began in 2021, developing an open standard for indicating the origin of digital images and whether they were authentic or AI-generated. And it all came to a head when we all saw the Pope in the puffer jacket; that started a lot of this discussion, and it was Midjourney that was used for that. So there are a lot of companies now saying, okay, we've agreed to sign all AI art with a cryptographic watermark. Microsoft's one of them, and there's a range of other companies. So there is work ahead in trying to find ways of ensuring there's an indication when what you're producing is AI art. But I know that from an Australian perspective, we still have a very different copyright law to the rest of the world; you can probably attest to that as well. It's much tighter, from what I understand. It varies in some ways, but I think there are things outside of copyright, like labelling, which I think is a really interesting one. And I think we're going to see the importance of this in the context of music in particular. Music's been a bit behind the text generators and the image generators, partly because there's a real uncanny valley issue: people can really tell when there's something just a tiny bit off in music, and very often synthetic music is just a little bit too strange. But it is coming along now in leaps and bounds, and there are certain genres that lend themselves better to synthetic music than others, particularly in the ambient space. Think about what the downstream implications of this are going to be. We see how Spotify is really trying to reduce its costs; it spends almost 70% of its revenue on licensing fees at the moment. It's already been caught out prioritising artists that it enters into special deals with, fake artists who provide ambient music, so that it prioritises that music in its playlists and algorithmically delivers it into audience ears instead of music by people like Brian Eno, who then get shifted off those playlists. We can see that there's very easy potential for them to create their own AI division, create their own AI-generated music and then put that into listeners' ears, now that we have, in so many cases, outsourced to Spotify the power to decide what we listen to. I think here we need solutions like labelling. There's nothing in copyright that's going to stop them from doing that. There's nothing in contract law that's going to stop them from doing that. But it might be that consumers want to know that what they're listening to is generated by machines, and maybe that will change their mind about whether that is actually what they want to put into their ears. Or maybe they'll be like, oh well, it's $2 a month less, that'll be fine. Yeah. I mean, at a concert that Beck did a little while ago, he played an AI-generated Beck song, and apparently it was terrible. So we've hopefully got a little bit of time. But you talked a bit about democratisation before, so I want to get into that, because it does sound like doors are closing in some ways, but are other doors opening? Yeah, that's actually a really good follow-up from Spotify.
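For readers curious about the mechanics behind the "cryptographic watermark" idea raised above, the core of C2PA-style content credentials is binding a claim about an image's origin to the image bytes with a signature. The snippet below is only a toy Python sketch of that idea, not the real C2PA specification or any vendor's API: the manifest fields and the symmetric HMAC key are simplifications (the real standard uses public-key certificates).

```python
# Toy illustration of provenance signing in the spirit of C2PA-style
# content credentials. NOT the real C2PA spec: a minimal sketch of the
# idea of binding a "how this image was made" claim to the image bytes.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-by-the-generator"  # hypothetical key

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Record the image hash and its claimed origin, then sign both."""
    claim = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,   # e.g. a hypothetical "image-model-v1"
        "ai_generated": True,     # the label consumers care about
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(image_bytes: bytes, claim: dict) -> bool:
    """Check the signature, and that the manifest matches these bytes."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and unsigned["content_sha256"] == hashlib.sha256(image_bytes).hexdigest())

image = b"...image bytes..."
manifest = make_manifest(image, "image-model-v1")
print(verify_manifest(image, manifest))         # True
print(verify_manifest(image + b"x", manifest))  # False: image was altered
```

The key property is the last line: any alteration of the image bytes breaks the binding, so a consumer can tell whether a provenance label actually belongs to the file in front of them.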
Or music, rather. Because today, and really always, musicians are struggling: a handful make it and the vast majority don't. So this issue already exists in music. But I often like using an analogy when we talk about what's happening with current AI tools and generative AI tools: the analogy of the drum machine. When drum machines, and electronic synthesizers in general, were introduced, many didn't even consider them valid musical instruments. You know, it's not a violin, it's not a piano, et cetera. And people making music with these electronic devices weren't even considered to be making decent, proper music. But what it did was allow a whole new generation of people who didn't have access to an orchestra to make music. I mean, hip hop exists because of turntables, because of the drum machine. And what we today call laptop musicians exist because of this new technology. Now, drummers didn't become obsolete; drumming still exists. But it is true, and I know this from a family friend, that a lot of drummers did lose their jobs, because all of a sudden at weddings or at various events, instead of hiring a three-piece, four-piece or five-piece band, you could hire a smaller, cheaper band with a drum machine. So there is always this shift. But when I talk about democratisation, I am referring to the fact that people who don't have access to an orchestra can make music. And from a very personal point of view, I can say that as a kid I wanted to be a filmmaker, and growing up in Turkey, I didn't have access to anything that could help me be a filmmaker. I didn't have access to people, I didn't have access to equipment, et cetera. Luckily I had access to a computer, so I learned how to program, and that was my entry into making moving images that somehow tell the stories I want to tell. So I'm very, very excited about the potential for kids in various parts of the world who don't have access to a lot of equipment, but who, if they have a computer with an internet connection, might be able to tell stories that they would not otherwise be able to tell. And that's one of the reasons why I'm quite excited about the potential of these technologies as well. Not only because, at that entry level, we're going to see more people able to make stuff, but because we're going to see professional creators take it much further and in different directions. And this is one of the reasons why I think it's a much more nuanced story about what the impacts are going to be for creative workers. It might be that it makes certain kinds of work more accessible and then results in increased commissions for the kinds of people who can make that work. And so I think there's huge potential, particularly for more personalised things, for more local stories to be told. Making a film based in a local community is still really expensive, even in 2023. But in 2025, is that going to be a lot easier? I think potentially, for sure. And so, as well as the darker side and the dangers that we need to be watching out for, getting a little bit excited about the potential too, and thinking about the ways in which we can help provide the conditions for that to flourish, is something I'm excited about.
Yeah, a lot of creatives are definitely seeing it as a creative renaissance in a lot of ways, because it means that a lot of the things they didn't have in the past are now available to so many more people. And that visual language can actually be much more easily interpreted by many more people, rather than being something you need five academic degrees to understand. So that democratising of a visual language is seen as very positive, and I think we underestimate the potential of it. I just wanted to add one thing about what you said at the very end there. I'm also an educator; I'm a professor at UCSD in the Visual Arts Department, and I've been teaching an AI class to art students, undergrad and graduate, for a few years. Two or three years ago, a prerequisite for my class was that you had to know how to program. I could only run my class with students who already knew Python and knew a little bit of technical things, because even two years ago you had to program to be able to do anything visual with AI. And then we had DALL-E and Midjourney, and Google Colab notebooks became very popular and added UIs, et cetera. Things changed in the last year, and so I was able to give a new class to art students and music students with no technical prerequisites at all, because we have the tools to do it. And as a result, the projects they're doing are so much more diverse, because the people who are able to come into my classroom and work with AI come from completely different backgrounds to the much more limited set I had before, when I said, okay, you have to know how to program to be able to use these tools. So I think that's very exciting. Yeah, the user interfaces for these generative tools have really opened them up to so many more people, like the non-coders, of which there are so many of us out there. So it's definitely a really exciting renaissance in that way. What are some of the ways that you've seen AI used creatively in art? We've talked about the terms, but what are some of the ways you've seen it used? I went to New York earlier in the year for the first time, and I saw one of the works by Refik Anadol, who uses different ways to generate moving images; he pulls them from databases, and sometimes he uses EEGs. If you've had the chance to see them, they're amazing: huge, large-scale, mesmerising works that are constantly evolving, and he's been working with AI for quite a long time. So that's one way: he's been able to tie in something really human, like the way your brain works, your EEGs, and make it visual in a way people would never otherwise be able to see. I do some work with the Science Gallery, so I've seen things like works that can understand your emotion and reflect it in an art piece, identifying whether you're happy, sad, disgusted, these kinds of elements, which I saw being done with the smell of blood, and that was just amazing, to be able to bring that interaction in by understanding human emotion. I've also seen lots of robotics being used in art, which is really popular, though sometimes it's hard to say whether that stuff is interesting or not. There's a famous artist who uses Spot, the Boston Dynamics robot, to actually paint and generate art as well. So it's kind of wild. People have some really interesting reactions to those robot dogs.
Some people really cannot handle them and some people love them, because, well, robot dogs. I know I said only two moments of audience participation, but: show of hands, who's seen the robot dogs in action? And who liked them? Okay. Who thought they were going to eat your face? I was just so creeped out by them. One project I just remembered, it's not really an artwork, but it's by Mario Klingemann, who's been working with AI for quite some time as well. I think it was in 2016, so it's not even a recent piece, and he did it when he was an artist in residence at Google Arts and Culture, which actually has a lot of problematic aspects, but I'll bypass that for now. What he made was a tool, and I think it still exists online, where you select two artworks from the Google Arts and Culture database, which has been archiving artworks of all kinds. The project is called X Degrees of Separation. You pick two artworks, any artworks you like, one could be a sculpture, a piece of pottery, a painting, and it plots a path of visual similarity from one to the other. What that means is it looks like a morph. So let's say you pick a Roman marble statue, and you also pick a pot; it does a kind of morph from one to the other, but it's not an actual morph, it just picks other artworks from its database that would be steps in this morph. And it's a really mind-blowing way of exploring this vast database of human creativity across millennia and seeing the similarities across continents. It's really, really fascinating. Well, we talked a little bit earlier about the ethics of where AI sources images from, so I'd like to get a little bit deeper into that. There was the controversy earlier in the year when the app Lensa was very popular, and a lot of artists started to notice that their styles were being used to generate pictures through it. So again, I'm coming back to my question about how we protect creators, but also, how can artists be part of ethical AI usage in future as well? I think when it comes to copyright, first of all, copyright only protects expression; it doesn't protect ideas. And we have generally accepted over the years that an artist's style is on the idea side of that spectrum. But that doesn't change how easy this now is. We were playing around with it this afternoon in my copyright class in the masters, and one of the prompt sequences we used was: first, okay, create an image of a law school; then create an image of a law school in the style of Wes Anderson; but then, create an image of a law school in the style of Cathy Wilcox, right? And when somebody suggested that, I was really curious to see how well an Australian cartoonist would be represented in the training database, whether we'd get something that was at all recognisable. This was on Midjourney, and we did; it was so distinct. If I'd seen any of these images, I would have said, that's a Cathy Wilcox. And so there are questions about whether we should find a way of protecting style. I think the answer is possibly, but probably copyright is not the right way, because copyright rights are usually fully alienable, and they get extracted very easily via contracts. So merely having the copyright is not very useful if you don't also have the power to hold on to that right. And we're seeing this play out in real time at the moment with voice actors working on computer games.
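The "path of visual similarity" idea described above is easy to picture in code. The sketch below is a guess at the general approach, not Klingemann's actual implementation: represent every artwork in the collection as an embedding vector from some image model (random vectors stand in for real embeddings here, and the scoring rule is a simplification), then walk from the start image to the end image by repeatedly choosing the unvisited work that is both close to the current step and closer to the target.

```python
# A sketch of a "path of visual similarity" between two artworks, in the
# spirit of X Degrees of Separation (not the actual project code).
import numpy as np

def cosine_dist(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_path(start, end, collection, steps=5):
    """Indices of `steps` artworks forming a visual walk from start to end."""
    path, current = [], start
    remaining = set(range(len(collection)))
    for _ in range(steps):
        # Prefer works close to where we are now AND closer to the target,
        # so each hop looks like one frame of a morph.
        best = min(remaining,
                   key=lambda i: cosine_dist(current, collection[i])
                                 + cosine_dist(collection[i], end))
        path.append(best)
        remaining.discard(best)
        current = collection[best]
    return path

rng = np.random.default_rng(0)
collection = [rng.normal(size=64) for _ in range(10_000)]  # stand-in embeddings
statue, pot = rng.normal(size=64), rng.normal(size=64)     # the two endpoints
print(similarity_path(statue, pot, collection))  # five stepping-stone indices
```

With real image embeddings, each returned index would be an actual artwork from the archive, which is why the result reads as a morph even though no pixels are interpolated.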
They walk into the studio, they sit down, they pick up the microphone, and they have to say: I'm Rebecca Giblin, I hereby assign all of my rights over this recording for you to use my voice, including in training a voice model. And because they didn't have the power to resist agreeing to that transfer, because they want to get this work over the 5,000 other people in line to be a voice actor on a video game, they're not able to hold on to it. So then they have to compete with a synthetic version of their own voice, right? They're undercutting themselves. Next time they come in, it's just like, well, why should we pay you that much money? We can already get a pretty good simulacrum of your voice with this. And so we maybe want to be thinking about whether these are personal rights, and about voice models as well. The ability now, with just a very small amount of training data, for people to say things in your voice, to put words in your voice, is so intensely personal. And I think that focuses attention on, invites us to think about, what is due to us as humans? Does the personal nature of what's going on invite us to think about what's different between humans and machines? And, to take it one tiny step further, what's different between humans and corporations, and do they need to be treated differently? So I'd say we need to think really carefully about these questions, but I would also avoid rushing into any kind of conclusion that copyright is the thing that will fix it. I've heard some reasoning recently that I think is quite dangerous, and again, this was a conversation between Paris Marx and Molly Crabapple, where by the end of a discussion about generative AI they were like, okay, well, copyright, we've been pretty sceptical about it, and it's really not great in a lot of ways, but maybe it's the best thing we've got. And I think it's dangerous to just settle for this thing that has done a really poor job of getting artists paid, and that has also caused a lot of collateral damage in terms of the loss of culture, through rights that are often overbroad and extend beyond anybody's interest in them. We should be thinking much more directly about what we want to achieve, and about ways of achieving it that are not just the frameworks we invented hundreds of years ago, when the printing press was invented. There are actually Australia's AI Ethics Principles; we were one of the first countries in the world to have them, in 2019. They were put together around transparency, fairness, accountability, a range of different areas, and there's a lot of work being done. I personally work at the National AI Centre, and there's a lot of work being done in helping people understand how to translate those principles into practices in the way you deploy AI in your organisation. And Lensa is actually a great example of one that didn't do anything for inclusion, because I heard of a lot of people using their avatars, and the men would get these amazing pictures of astronauts and scientists, and the women would get naked fairies or, you know, other examples like that.
So it's really interesting how many AI products are out there at the moment that are not in line with these ethics principles. But I think there will be a drive around responsible AI, where people see it not just as something they have to do, but as something their brand and their values really embrace, because that way you're actually going to have an AI industry that people want to use as well. But we're actually also going to see the guardrails coming off. A lot of the big commercial products out there at the moment have been really careful, because we already know that if you provide something without guardrails, humans will do terrible, terrible things. Like with Microsoft's chatbot a couple of years ago, which users turned into a Nazi in five minutes. In Midjourney, there are a lot of prompts where I get told, oh, I think you might be violating the community guidelines, and I'm like, I'm pretty sure I'm not. But they're being really careful. There are some products coming out, though, like one I was playing around with that will undress any woman, right? You take a clothed picture of a woman, and using this neural net, it will take her clothes off and show you what she apparently looks like underneath. So again, these are really personal things, and we haven't necessarily thought much beyond the misnamed idea of revenge porn about what to do with pictures of us. But the idea that fully clothed pictures of us could be put to this kind of use is something we're going to be confronting more and more. So there's a drive to the less ethical, as people figure out how to take the guardrails off and these technologies escape into the wild. That's probably more of that deepfake kind of scenario, where, you know, we do have a lot of challenges around deepfakes with voice, deepfakes with faces. But people are already working on deepfake detectors and a lot of those things. And there is a high barrier to creating those really good deepfakes; it's not something where you can just download a free product and do it. So there is a barrier to a lot of that kind of stuff. There is now, but to create a very believable deepfake, yes, right now you need a lot of skills; within a few years, I'm sure it will be off the shelf. And in response to what you were saying, Elon Musk famously found ChatGPT to be too woke, so he wants to build his own version that's not woke. But going back to your question of how we protect the creators, I don't have an answer, but I would like to make the question more difficult by adding some more complications, because you've already discussed it. I just wanted to add a few points. One of the obvious potential solutions that's been proposed is this idea of consent for training data, because right now the models are being trained on stuff scraped from the internet without the original artists' consent. Famously, one of the most recent data sets is called LAION; it's what Stable Diffusion is trained on, for example. Initially it was five billion images scraped from the internet. And there was a huge uproar because, for example, it might contain lots of artwork by a famous painter; the canonical example is Greg Rutkowski, a fantasy painter.
And then you can say, okay, give me a dragon doing this, that and the other in the style of Greg Rutkowski, and it gives you something that looks, at least to an untrained eye, very much like a Greg Rutkowski. So the first step was, okay, you have the option as an artist to opt out of the data sets. And this was obviously not enough, so the argument was, okay, it shouldn't be opt-out, it should be opt-in: only artists who agree to be in the data set should be in the data set. I actually have hundreds of images in that data set myself; personally, it doesn't threaten me, so I'm not bothered about it. Now, opt-in might seem like a good idea, but it really isn't, for two reasons. First, you don't need to be in the data set to be replicated. You can have Greg Rutkowski removed from the data set, but then I, as an individual, could take that model, give it one image by Greg Rutkowski and say, create me an image in this style. So now the organisation that trained the model, for example Stability, is theoretically innocent, because they didn't train on Greg's work, but I, as a random anonymous internet person, can still use that model to mimic Greg's work. So for me, the problem isn't what's going into the model; it's what's coming out. The other issue around all of this is... oh, I forgot where I was going with that. Anyway, so this is one of the big issues. I had another point, but I can't remember it now, so I'll leave it there. I just wanted to say that controlling what goes in isn't the problem; it's what comes out. And what comes out is not just a problem of AI; it also happens without AI. I could imitate someone else's work, and there's not necessarily any protection against that, because copyright does only protect expression. If I imitate the work exactly, then okay, that's a case for copyright. But if I imitate a style or an idea, that's not protected. Should it be protected? On one hand, knowledge progresses when we share all of this. And I do like the idea of being able to train AI... oh, that was my second point. If we start prohibiting and enforcing opt-in consent, this might have a very bad consequence, in that right now there are a lot of open-source movements building AI. If we say, okay, we need opt-in consent, and lots of artists opt out, this will allow big companies to hire a handful of artists and just mass-produce training data. And then it will be even more concentrated in the hands of these big companies, who can afford to generate data to train on, and all the open-source or smaller initiatives will not be able to compete in producing more data. Whereas right now, the open-source alternatives are, not surprisingly, or I should say inspiringly, on similar levels of quality. So I just wanted to add those complications in there. Just giving everyone a heads-up that we're going to open up to questions in about five minutes. With questions, please keep it to one line that ends with a question mark, and someone will come to you with a microphone. So just keep that in mind if you've got anything; we should have time for a couple. I guess before we throw over to questions: these are big conversations that need to be had. Who should be in these conversations to continue developing the ethics, to continue putting in frameworks, to decide whether things should be opt-in or opt-out? Which people need to be involved in those discussions?
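To make the opt-out mechanics discussed above concrete, the sketch below shows the kind of filtering step a dataset builder could run against an opt-out registry before training. The registry format and exact-hash matching are hypothetical simplifications; real opt-out services that grew up around LAION-style datasets match on image URLs or perceptual hashes. Note that an exact hash is defeated by any re-encoding of the image, which is part of why opt-out is weak, and, as Memo argues, it says nothing about what comes out of the model.

```python
# Hypothetical opt-out filtering of a crawled training set.
# Each opted-out work is identified here by a SHA-256 of its bytes; a real
# registry would more likely use URLs or perceptual hashes.
import hashlib

OPT_OUT_REGISTRY = {
    hashlib.sha256(b"bytes-of-an-opted-out-artwork").hexdigest(),
}

def filter_training_set(candidates: list[bytes]) -> list[bytes]:
    """Drop every candidate image whose hash appears in the registry."""
    return [
        image for image in candidates
        if hashlib.sha256(image).hexdigest() not in OPT_OUT_REGISTRY
    ]

crawl = [b"bytes-of-an-opted-out-artwork", b"bytes-of-some-other-image"]
print(len(filter_training_set(crawl)))  # 1: the opted-out work is excluded
```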
I think it's definitely a multidisciplinary sport. I have a lot of focus on the business side of things, and they're already talking about having a responsible AI champion as part of an organisation, someone who ensures that what you're doing is actually aligned to the ethics principles. But I think artists, curators, gallerists, if we're thinking about the creative sector, probably need to be involved in some of these ethical discussions too. A few little stories. I have also been involved in AI academically; I have a PhD in the topic and I used to go to AI conferences. In 2016, I remember going to the biggest academic AI conference, called NeurIPS, Neural Information Processing Systems. It's a technical conference and it's huge; over 10,000 academics go. And there was one tiny little workshop, in a small room about this big, around ethics, and hardly anyone went to it. The following year, Kate Crawford, who is a co-founder of AI Now, one of the world's leading socially responsible AI organisations, gave a keynote. And she's Australian. Yes, yes, thank you, she's Australian, and really a wonderful person. She gave the opening keynote at that same conference. And a few years after, NeurIPS introduced a rule that any paper to be accepted, and it's the most prestigious conference, needs to have an ethical considerations statement as part of the paper. There was a huge backlash against this from lots of AI researchers saying, you know, I'm a scientist, I don't think about the ethical considerations of what I do; why should I be thinking about how face detection might be misused? But there was also a huge welcoming. And I remember chatting with a psychologist friend of mine about this, and she was shocked that people don't think about the ethical considerations of the work they do, because obviously in psychology in the 60s people didn't, but it's so ingrained now in psychology education that you do have to think about it. So what I'm trying to say is things are changing quite quickly, but I think it will take a generation for it to be fully embodied. I was once at a private dinner with a senior Facebook AI researcher, and I said, any team that deploys a product that is going to be used by the masses needs to have, as part of the core team, an ethicist, a sociologist and an anthropologist, in the same way that you would build the team saying, okay, we need two UX designers, we need a network engineer, we need a UI designer. You should put on that list, as integral: we need an anthropologist who will study the consequences of this, that and the other. And this engineer got really angry and offended and said, what makes an ethicist have better ethics than me? That kind of shocked me and made me realise what a bubble I was living in. I also want to add that Google famously said, we don't need regulation, regulation can't take care of this, we need to self-regulate, so we will have an AI ethics board internally. And when the person running it, Timnit Gebru, did her job in highlighting the ethical dangers of the work Google was doing, they fired her. So self-regulation also isn't necessarily an option. Government regulation, I can't even see how that could work. So I don't know what the answer is; again, I'm just adding complications. But I think it definitely is going to go the same way as our ESG goals.
We have sustainability goals, we have diversity and inclusion goals, all these kinds of things, and it will become part of that, because otherwise, algorithms and algorithmic decisions are going to be part of business, so they have to be treated like business. I agree with you, and I find that really bleak when I look at how poorly all of those things are performing at the moment and how quickly we are running out of time to fix this. And so I think that's something we really need to be conscious of now. These technologies are going to become endemic. The legal consequences are going to take probably well over a decade for us to even start to get our heads around, including whether it should have been allowed in the first place, and by then the horse is out of the barn. And so I think we need to be... There's been a lot of work... No, no, no, I do understand that. But I'm also looking at this: of course there's a wide spectrum of actors in the field, with different motivations, different business models, different funding and so on, but what I am seeing is that there's so much capital being put into chokepointing these markets as well, to get these technologies into the hands of a small number of powerful corporations and to extract ever more value for a small number of people, while ignoring the enormous environmental consequences of these technologies. And we're actually running out of compute power for some uses at the moment, because so much of it is going to these models. As we were saying, we don't even have a large language model in Australia. We don't have one. Yeah, I think CSIRO is trying to buy one; everyone's trying to figure out how to get one. But we don't actually have the hardware for one here. And we also don't have the legal framework that would permit us to create one here; legally, it would be way too risky. And I think that's what you were alluding to when you talked about Australia's laws meaning we can't have search engines. Exactly. So you should all know that we don't have any search engines running out of Australia because of the legal issues. Making a search engine means copying everything on the internet, and copying everything on the internet is a copyright infringement unless you've got an exception that applies, which we don't. But I think the government, you know, I think people are trying to get ahead of that, because we've had so many challenges around the use of AI, particularly with things like the Robodebt issue. And I really see businesses wanting to get ahead of that and have a responsible AI strategy around what they're doing. And I know it's hard to believe, but there is a lot of work happening in that space. I do know that they're trying, but I'm also acknowledging those broader considerations: the fact that we lack the hardware and we lack the legal framework to even create the models here. So by the time we get there, these large models that are entrenched in other jurisdictions with less consideration of that are going to be what's on offer.
So I guess what I'm saying is, I'm really urging us to pay close attention to this now. You know how, ten years after a new thing, like ten years after smartphones came out, we looked back and thought, maybe we're not really that delighted with the consequences of everything we got from that? And we think back, and there were a bunch of really obvious things we might have done differently if we'd been thinking about it then the way we are now. We're in that moment now. We're making those mistakes right now with generative AI, and in ten years we'll look back and think, oh, we should have done this, this and this. But we don't know what we don't know yet. I just know that we are making mistakes right now. That's the happy period we're in. And that's an interesting point on which to see if anyone else wants to enter the conversation. We probably have time for one, maybe two questions. Is there... all right, there's one down here. Wait for the microphone. Thank you very much, everyone. Memo, I'm really interested in how your art explores the connections with technology. Can you talk a little bit about that? Sure, yeah, thank you for the question. I started working with software in particular, and writing code really, as I mentioned, as a means, the only means available to me, of making. And initially it started out as a tool: the computer is a tool and I'm using it as a tool. And it's a medium, let's say, and it's a medium that I really enjoy. It's a medium that is dynamic, a medium that can be responsive and interactive, a medium that can scale to be very large and immersive, and a medium that can be very small and intimate. I've made apps as artworks, when the iPhone came out, for example. But I've increasingly become interested in this medium not just as a medium, but also as a subject matter. So here, for example, I'm not just using AI as a tool; I'm interested in the social, cultural, ethical and legal implications of these technologies. For me, it's very, very research-driven. I love doing research, and I research both how to use these technologies as a medium and their broader implications, legal, ethical, et cetera. And I started using AI... well, AI, again, is a very, very broad term; arguably I've been using AI since the beginning. But I got into machine learning, let's say, probably about 14 or 15 years ago, as a way to build systems that could understand what was happening in the world around them. I wanted to build responsive environments, interactive systems that could sense people, that could somehow try to understand what they were doing, where they were going, what they were saying. And this is the job of AI. I gradually started doing more and more of that, and in 2014 I realised: this is getting big, this is going to be big, I really want to know this really well. So I started a PhD in AI. And little did I know, it did end up being quite a bit bigger than I thought, sooner than I thought. I should also add, as a kind of side note, people always used to say the people who should worry about AI taking jobs are the labourers, the truck drivers, this, that, the other; artists are safe. And I was thinking, no, no, no, no, no.
Artists, not conceptual artists, but what I would call creative labour workers, are going to be the first to be replaced. Well, replaced is a bad word, sorry; the first to have automation come into their work. Because there's no absolute truth: you can get something wrong and be okay. It's not like a medical diagnosis, where you say, oh, this person doesn't have cancer, and it turns out they do; that's a mistake you can't afford to make. But with what we call art, creative labour, there's no absolute truth. And also you're not interacting with the physical world; robotics is complicated, but purely virtual is quite easy. So ten years ago I was expecting what's happening now to happen eventually; I just wasn't expecting it to be this soon. I wasn't expecting it to happen by 2023. That was a bit of a digression, but does that answer your question, or should I go into more detail? Yeah. So the work that I have here combines the two hyped technologies of the time, actually: AI, but also blockchain and distributed computation. The idea of a blockchain is that, instead of the way Amazon works, where there are Amazon's servers, people connect to those servers, Amazon owns the servers and Amazon owns the data, the utopian vision of a smart-contract-based blockchain was that we distribute all of this and everybody owns it. It's actually a nice idea; it doesn't necessarily play out that way. So that exploded in 2021, NFTs, et cetera, and the work was a response to that. And I was using cephalopods, which have a distributed nervous system: their central brain is actually tiny, and they've distributed that computation across their bodies. So I'm using cephalopods as a way of reflecting on that. Again, the technology is the subject matter here, but it's also the medium, because I wrote custom software using AI. This was 2021, so before Midjourney and all that. The images are generated with AI, generative AI. The text is generated with AI. And the text is encoded in the image as an invisible watermark. So I released the images as NFTs initially, and then a month later I announced that everybody who bought an image had actually bought a verse from a manifesto written with AI. So it's actually a book that's distributed on the blockchain. Did a lot of people buy it? Yeah, there were 256 images and it sold out instantly. It's also worth saying, it was really fascinating: I released eight a day on Twitter. It was all pre-scripted, so every day eight critters were spawned, as I call them, spawned and sent out into the world. And I didn't do anything to create a community. Usually in the NFT Web3 world, it's all about community Discords. I didn't do any of that, but a community emerged, and people were watching, it was all auctions, and live-narrating the auctions on Twitter. Like, oh my God, so-and-so just bid this much, oh no, this, that and the other. And I was just watching this in absolute fascination. You know, it was the pandemic, people really wanted community, and they were forming community around this project, the NFT version of it. So, yeah. Because we were talking earlier, I actually asked you: are you pro-NFT or anti-NFT? And you kind of went both. Did you want to explain that further? Yeah, I think it's possible to be both on many topics.
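Memo mentions the manifesto text being encoded in each image as an invisible watermark. The work's actual encoding isn't described in the talk, but the classic textbook way to hide text invisibly in an image is least-significant-bit steganography, sketched below in Python with NumPy purely as a hypothetical illustration, not as the method used in Distributed Consciousness.

```python
# Least-significant-bit steganography: hide text in the lowest bit of each
# pixel channel. A generic classroom technique, not the artwork's encoding.
import numpy as np

def embed_text(pixels: np.ndarray, text: str) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(text.encode(), dtype=np.uint8))
    flat = pixels.flatten()
    assert bits.size <= flat.size, "image too small for this message"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite lowest bit
    return flat.reshape(pixels.shape)

def extract_text(pixels: np.ndarray, n_chars: int) -> str:
    bits = pixels.flatten()[:n_chars * 8] & 1             # read the lowest bits
    return np.packbits(bits).tobytes().decode()

image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
secret = "a verse from a manifesto"
stego = embed_text(image, secret)
print(extract_text(stego, len(secret)))                       # recovers the text
print(np.max(np.abs(stego.astype(int) - image.astype(int))))  # 1: imperceptible
```

Changing only the lowest bit of each 8-bit channel shifts a pixel value by at most 1 out of 255, which the eye cannot see, though watermarks of this kind are fragile under lossy re-compression.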
With NFTs in particular, I was, perhaps quite infamously, against the blockchain Ethereum for its ecological footprint in using this algorithm called Proof of Work, which I won't go into now. So I was very anti that. I did this work on another blockchain called Tezos, which is thousands of times more environmentally friendly because it uses a completely different algorithm, and Ethereum has switched to this now as well. But I was on the fence, or well, not even on the fence; I enjoy being able to dig into both extreme ends of the discourse. On one hand, NFTs come from a very anarcho-capitalistic, almost genocidal worldview. If you read some of the early manifestos from the nineties by some of the people behind these technologies, there's a genocidal level of, yeah, anyone who can't keep up with the technology deserves to die, and this includes, and then they list the kinds of minorities they think shouldn't be around. It's at that level in the early histories of these technologies, and arguably you can see remnants of that in the space. But on the other hand, there's a very utopian vision of decentralisation and equity, the complete opposite. And I wanted to explore it and put myself in there to see what I would see. I'm so sorry, it's a very exciting bit of the conversation, but that's time, so we're going to have to stop. I'm so sorry. But thank you all for coming. Thank you to Memo, to Rebecca and to Rita, and thank you all for coming as well. I'm sorry to cut it off at one of the most interesting bits, but we're actually already running over time. And thank you, Elizabeth. Thank you.