Does AI dream?

What sort of creative project could a human being and a robot work on together?

"What sort of creative project could a human being and a robot work on together?" Mark Riechers/Firefly (TTBOOK)

Original Air Date: March 30, 2024

AI can do amazing things – write your term papers, sequence your genes, maybe replace your therapist. But even super-intelligence has limits. So, does AI really have a mind — or a soul? We'll explore the frontiers of artificial intelligence — from robots painting masterpieces to software engineers trying to create god-like machines.

Walter Scheirer

The internet is indeed overflowing with fake content, says computer scientist Walter Scheirer. But the vast majority of it seems aimed at the creation of connection—rather than destruction.

Meghan O’Gieblyn

Does AI have a fundamentally different kind of intelligence than the human mind? Essayist Meghan O’Gieblyn is fascinated by this question. Her investigation into machine intelligence became a very personal journey, which led her down the rabbit hole into questions about creativity and the nature of transcendence.


When painter Sougwen Chung paints something in collaboration with an AI she trained — say, a black oil-paint brush stroke — a robot mimics Chung. But at some point, the robot continues without Chung and paints something new. So how creative is AI?


Full Transcript 📄

- [Anne] It's "To the Best of our Knowledge". I'm Anne Strainchamps. Walter Scheirer was a teenage hacker, one of those kids who found his way to the shadow side of the internet and fell in love with its weird culture of fact and myth.

- [Walter] This is like I guess when I was in middle school, some friend of mine found some site on the early internet, these text files, creative writing produced by hackers. Some of it was technical, some of it was purely fictional and really interesting. It was like very underground, cheat codes, video games and things of that nature. But then it would dive deeper where it's like, have you ever considered how your operating systems work? Have you ever heard of this Unix operating system? This is what businesses in the government use. Right? Here's how to use it. Here's how to access it. So then you get into the creative writing aspect. It's like, by the way, I was on this Unix system that I accessed through the federal government of the United States, right? Seeing interesting things I was not supposed to see, UFOs, paranormal activity stuff that, that couldn't possibly be true. But it was intriguing enough where you wanted to keep reading. There's a very famous hacker group, The Cult of the Dead Cow that loved telling reporters that they had this-

- [Anne] I love the name.

- [Walter] Right? Like special way to move satellites in orbit. Not technically feasible, but it seemed plausible enough. The hacking world was not very large. And so it was not that difficult to find the quote, unquote "elite hackers." But of course they were like the elite. So I was kinda like the bottom feeder, the neophyte.

- [Anne] The kid brother.

- [Walter] Exactly. So hackers were really upset as they started to gain some notoriety with their portrayal in the press.

- [Newscaster] Quentin is a hacker, a computer genius who illegally breaks into computers for fun.

- [Walter] Hackers basically convinced Dateline NBC that they had recovered evidence of UFOs from government military facilities. So the episode features this hacker who appears in this anonymous fashion.

- [Newscaster] His voice is altered, his face hidden.

- [Walter] So you can't, you know, make out his appearance.

- [Newscaster] That's the only way Quentin would agree to talk to us.

- [Anne] So does he have one of those deep gravelly voices?

- [Walter] Yeah, exactly. Exactly.

- [Quentin] I have access, you name it, government installations, military installations.

- [Walter] He's at his computer and these text files are flashing by, you know, the catalog of UFO parts list, all this like crazy stuff. What the hackers end up doing is going through this culture-building exercise on their end, hoping that people who are thinking critically about this news story realize, like, wait a minute, why isn't this the emphasis of the episode, right?

- [Anne] Did you take part in any famous exploits or are there things that you did that you will not reveal on the air? I have to-

- [Walter] I wish. I wish, yeah. No comment, no comment.

- [Anne] Okay, so the world may never know exactly what Walter Scheirer did as a teenage hacker, but today he's a computer scientist at Notre Dame where he works on media forensics, meaning he detects fake things on the internet for a living. That's also the title of his new book, "A History of Fake Things on the Internet". So fear of deception and misinformation seems to be at an all-time high right now. And for good reason: generative AI is ushering in a whole new world of utterly realistic fakes. I don't think we're ready for it, which is why I wanted to talk with Walter Scheirer. He's a realist about the digital future, and what he sees isn't the end of truth, but the next iteration of the human imagination. Case in point: a couple of years ago, he sent his students to scour the web for examples of fake content, and they found plenty, just not what they expected.

- [Walter] What we started to notice is that, yeah, there's a ton of manipulation out there, but it's almost always in meme format. So images that are not secretly edited to fool you. Images that are obviously edited, and they're usually humorous, right? There's some sort of joke embedded in the content, but in some cases the meme is suggesting, right, some alarming things. We have a large collection of anti-vaccine memes, some pretty humorous memes. Like one in particular I have in mind, it's some babies and their eyes have been edited with Photoshop. So they're huge and they're kind of scared, like, uh-oh, I'm gonna get vaccinated. But then if you keep going down the rabbit hole, right, the messaging gets more and more insidious. You start to sort of question, right, if you get deep into this, maybe there is something to these jokes, right? Maybe I should scrutinize this matter more. And that's a tricky thing when you think about humor on the internet, or humor just in general. In some cases, parody and satire are really important. In other cases, you have to be a bit more critical when you're looking at what the message is.

- [Anne] It's incredibly complicated to draw the line. What is harmless, and what is dangerous and deceptive? Have you arrived at any clarity around where that line lies?

- [Walter] Yeah, that's a great question. So one thing I came to appreciate more actually when writing this book is that parody and satire are really important. This is a very old format for making a social critique, and it's often used very strategically. So in the book, I point out a really famous case, which predates the internet: Jonathan Swift's famous pamphlet, "A Modest Proposal". It's about cannibalism, it's about eating babies. It's really aggressive, right? It's really disturbing. But Swift isn't really talking about cannibalism. What he's trying to do is make a social critique about the state of the poor in Ireland. But over the years, this pamphlet has been routinely misunderstood, even up to the present day. People lose their minds. It's like, how in the world could we have people out there advocating for the consumption of babies? Sort of missing the point. And a lot of the internet is just like that. There's all this transgressive material. You have to think a bit, right, to get the message. And if you're not thinking, that's where it gets dangerous. And of course, the feed-like nature of social media causes us to kind of just stick to the surface-level message.

- [Anne] Let's talk about deep fakes. When people talk about being really afraid of fake things on the internet and about a tsunami of fake content, is that overblown?

- [Walter] Yeah, so this is a question that's been coming up recently. So deepfakes now have been around a little while. They first appeared on the internet in 2017. There was huge concern right away that you would have videos appear in a political context that changed the course of an election or led to political violence, really bad sort of doomsday-like scenarios. But none of that has transpired, as far as anybody can tell. Deepfakes have had really no impact on politics whatsoever. Where they do really concern me, where I am really sort of wary of them, is when they target individuals. For instance, there's been a rise in deepfake pornography. The tool is being used now basically to humiliate women. And that is completely unacceptable.

- [Anne] Yeah, so revenge porn you're talking about where somebody photoshops the girlfriend who dumped them, or, oh, wasn't there a recent case of a journalist in India? She wrote something critical of the government.

- [Walter] Yeah, exactly. She was a prominent politician, and she was targeted with this deepfake pornography to basically discredit her.

- [Anne] Which I think she has said, kind of destroyed her life.

- [Walter] Exactly. Exactly. Another case, too, that's been coming up: there's this really perverse genre of pornography. It's like fantasy porn. I wanna see this actress in this pornographic scene, so you use a deepfake to construct it. This is a huge problem. I feel like this is a human rights issue, fundamentally. You're seeing children do this now, right? There've been several cases in middle schools, high schools, where this has come up. But what worries me is that a lot of the conversation is around those sensational things, and we often lose sight of the real problems that are affecting real people. The deepfakes in the pornography realm haven't received the same treatment in the news that all the political stuff has. And I think that's a real problem.

- [Anne] I guess the other thing that people seem to worry about an awful lot is that, because of the global reach of the internet, the overall impact of deepfakes is to create a culture in which there's just a breakdown of trust, the feeling that we can't count on anything to be real, and that that's ultimately gonna wind up sabotaging democracy.

- [Walter] Yeah, so I think this is a pretty interesting insight that relates to the design of the current internet. So a lot of my time recently has been spent thinking about alternatives to this. If you look back to the early days of computer networking, it wasn't like this. You would have more localized computer networks. You had these things called bulletin board systems, right? So somebody in a local community would put their computer on the phone network and people would dial into it. Usually, again, it was like a special-topic board. So people interested in something like knitting, right? There would be a knitting bulletin board. Whatever your favorite hobby or interest was, there was something out there. And so you didn't have these global services where, with a click of a button, I'm reaching basically everybody who is subscribing to the service, who's online. It sort of worked a lot better. There's a scholar of media theory at the University of Virginia, Kevin Driscoll, who wrote a fantastic book called "The Modem World". And he makes this case that maybe decentralizing things on the internet, such that amateurs have some control over the infrastructure like we did back in the '80s and '90s, would be really good and healthy. And I really agree with that. I think this is a fantastic idea. I also think it's not that expensive to do this. I think it's just we gotta get people to sort of unplug from these giant social media services and go back to this older model. And again, many people are familiar with this 'cause they've been online as long as I have, you know, since like the early 1990s.

- [Anne] The way you wrote about the days of the hackers, there was definitely a feeling of, you know, the internet wants to be free. And I felt like that's where you were coming from. Which I couldn't quite put together with knowing that here you are teaching at Notre Dame, and I know that Catholic social thought has informed the way you think about the internet. Somehow I was having a hard time putting these two things together. Former hacker, Catholic social thought. Can you explain?

- [Walter] Yeah, so I think behind this whole book project is really the idea of community, and how communities form, like the hackers I just mentioned, right? This is a really interesting subculture. You have a bunch of people who are connecting for the first time and building something that endures. Hackers go on to create the computer security industry. They do extraordinary things. Some of them go into politics; Beto O'Rourke famously was a member of the Cult of the Dead Cow. How do you build a community like that? It seems to me like the internet has huge potential for bringing people together. And of course, if you turn to Catholic social teaching, it's really about how we flourish in a communal sense. How do you build some notion of the common good? And if you turn back the clock and look at the ideas that informed the construction of the internet we have today, a lot of that comes from Marshall McLuhan, the famed media theorist. He's very much associated with the counterculture in the 1960s. One fact that a lot of people don't know: he was a devout Catholic.

- [Anne] I did not know that.

- [Walter] He converted to Catholicism and he believed that the Catholic faith was the ultimate media system because you're always in communion with the saints, right?

- [Anne] Ha.

- [Walter] People who have passed. And of course with God, all of these things come together, right, through prayer, meditation, these forms of spirituality. And it's interesting to see those ideas then sort of trickled down into his thinking about the media. He was obsessed with this idea of uniting the entire planet through information networks.

- [Anne] There's also a little element almost of transubstantiation, isn't there? Didn't he write about in the end we will become information?

- [Walter] Yeah, yeah, that's exactly right. Technology's kinda mysterious when you think about it. I don't think that's entirely a crazy idea. A lot of people are talking about emerging technologies in this way too. Think about AI, right? Is there some spiritual dimension to it? And again, I think there's something to that. And of course, all these things, when we think about technology, are, you know, human creations. And in Christianity, right, we're called to co-create with God. He gave us this facility to create. And so that, I think, is a very powerful message behind the scenes with a lot of this stuff.

- [Anne] Say more about AI and the spiritual component. That's not something I'd imagined. Mostly all you hear about is that AI is going to eat humans.

- [Walter] Yeah, exactly, we have these doomsday notions about AI. Now, again, as a practicing computer scientist, I have a more realistic, grounded view on AI and its limits. It's not gonna destroy the world anytime soon. I'll reassure all of the listeners out there that we have more pressing issues here on planet Earth to deal with. That said, then you have another community that's like, you know, we're gonna create this superintelligence and then worship it. That's not a great idea. And also, again, is the superintelligence even possible? Is this where the technology's heading? Again, from the realistic vantage point of a computer scientist, no. Then there's this really interesting third perspective, which again comes back to this idea of technology and creation, right? It's like, where does AI come from? Well, it comes from all of us. The most powerful AI systems, ChatGPT, Midjourney, DALL-E, these things are trained on the data that we generate and then ship to the internet. It's sort of a reflection of all of us, the human community. And that's actually reassuring, right? I mean, yes, it's gonna have some flaws because humanity is flawed, but, like, this is kind of neat. We've all had input into these systems.

- [Anne] Well, there's this concept that you bring up that I find really fascinating, the myth cycle. So this was a concept the great anthropologist Claude Lévi-Strauss came up with, this idea that we live in kind of two different realms: the real world, the realm of truth, and this myth cycle, which is, well, what?

- [Walter] Yeah, so Lévi-Strauss had a really keen insight: that the imagination is really useful for human survival, interaction with others, problem solving. And it's frequently discounted once you get into the sort of 19th, 20th centuries. But he is arguing that people are always thinking beyond their immediate circumstances. Why do they do that? Why do they tell these stories in general? If you think about it, if you're a perfectly rational person and you wanna optimize every aspect of your life, why would you waste time telling stories? Why would you waste time making things up? That's not efficient. But Lévi-Strauss is arguing that if you're thinking beyond your immediate circumstances, you can do way more than if you're constrained to just the factual knowledge and the observable world you're in. Does that make sense? Right. It's almost a shocking thing to say in the 21st century. Wait, I can just daydream and that's gonna help me?

- [Anne] Well, it begins to make a little more sense of our drive to create this virtual world, because it can be tempting to say, oh my God, why are we wasting our time trying to simulate everything that already exists, building a crappy copy online? But it seems to me that your point with Lévi-Strauss is no, my God, online is where this enormous human drive to create myth lives. And rather than the enchantment of the world having disappeared, maybe it's kinda sneaking its way back in, in this wild inchoate frontier that we've created. And all these little memes actually may be more significant than we think.

- [Walter] Yeah, absolutely.

- [Anne] Am I getting this about right?

- [Walter] You put it better than I did in the book. That's an excellent summary. Yeah. I mean, I firmly believe that. Again, it's not terribly surprising when you look at culture through the centuries. So much of culture is filtered through some sort of myth cycle. And memes are, again, sort of a rapid-fire way of moving the myths around, which I think is really neat. There's such a strong human desire to do this. We're further developing really innovative technologies to tell stories, right? Surprise, surprise, as time goes on. I think that's largely misunderstood. It's like, what is the internet for? Again, I think a lot of people would still say it's the information superhighway. You go there to get facts, to get your work done. And that's what it's for, right? You still hear this corporate messaging from the dot-com era of the '90s. But it was never really meant to be that, right? It was really McLuhan's vision of this creative space where, again, we're gonna share projections of our imaginations. Lévi-Strauss would sort of be smiling about all this. You know, this is the natural progression of the myth cycle.

- [Anne] That's Walter Scheirer, a computer scientist at Notre Dame and author of "A History of Fake Things on the Internet". Coming up, writer Meghan O'Gieblyn goes looking for the soul in the machine. I'm Anne Strainchamps. It's "To the Best of our Knowledge" from Wisconsin Public Radio and PRX. Some of the most interesting questions about artificial intelligence are existential, not in the sense of will AI destroy the world, but more like what can it teach us about ourselves, about the deep nature of our own human minds? What's the difference between machine and human intelligence? Well, Meghan O'Gieblyn is one of the best writers I know on this subject. In her book, "God, Human, Animal, Machine", thinking about the nature of machine intelligence leads her to questions about her own creativity, about the unconscious, and our human longing for transcendence.

- [Steve] Before we get started here, just for kind of scene setting, is there anything to mention?

- [Megan] Scene setting? I know I racked my brain.

- [Anne] And because Megan lives in Madison, Wisconsin, just a few miles from our studio, Steve Paulson stopped by her house to get a sense of how she's thinking about AI now.

- [Megan] I did find my old notebooks that I did automatic writing in. It's not really. I don't know how.

- [Steve] Okay, that's cool actually. Huh.

- [Anne] That's right. Megan has a whole stash of notebooks filled with her automatic writing, the kind of stream-of-consciousness prose the surrealists turned out a hundred years ago when they were trying to tap directly into their unconscious. And one more thing: this all came out in a series of sessions under hypnosis.

- [Megan] The hypnotist was very insistent that I write fast.

- [Steve] Wow.

- [Megan] So I was in his office lying down on a recliner and, you know, he led me through this whole visualization exercise. I had to stare at the ceiling, keeping my eyes open without blinking for a certain amount of time. There were also some bells and gongs involved. And then he said, okay, you can pick up your computer now and start typing. He said the only rule is that you can't stop typing.

- [Steve] And were you actually in a hypnotic state at that point?

- [Megan] I was in a weird state. I don't know if I was under full hypnosis. I did this several times with him, and I'm still not really sure I'm fully hypnotizable. But then I also did these little exercises, and this is something that the surrealists used to do too. Like, when you first wake up in the morning, your brain is still loose, very associative, in that sort of dream state. Just grab a notebook and start writing without thinking. And so this was the notebook I used to do that. And I can barely even make out my writing 'cause I was trying to do it very quickly, yeah.

- [Steve] Would you be willing to read anything from your notebook? I'll let you choose if there's anything there.

- [Megan] So I don't know if you've read any of the surrealist texts. So it's weird because a lot of what I wrote sounds very similar. It's just really free association. So this one I think is mostly legible. I could try to read some of it.

- [Steve] Sure. Yeah.

- [Megan] Okay. And all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils on the side of the road glaring with their faces undone. And all those trips back and forth when the sun was so high and naked in the sky, we thought that it might drown us. Maybe there's salvation in the soft touch, the lone sound, the metals, and the trophies of our former age. But when it comes to that, we will need the thunder and the solitary guidance of some greater force.

- [Steve] Oh wow. So why did you wanna do this? Why did you go to a hypnotist and why did you wanna try automatic writing?

- [Megan] Well, so I was going through a period of writer's block, which I had never really experienced before. It was during the pandemic, and I was working on a book about technology and AI. GPT-3 had just been released to researchers. And I was reading this algorithmic output, synthetic text, that was just so wildly creative and poetic. These models could basically do a sonnet in the style of Shakespeare or, you know, write very dreamlike, surreal text, short stories, poems.

- [Steve] So you wanted to see if you could do this yourself, not using an AI model but yourself.

- [Megan] Well, I became really curious about this idea of what does it mean to produce language without consciousness? And for me, as somebody who, at this point in my life, I was really overthinking everything in the writing process and my own critical faculty was getting in the way of my creativity, it seemed really appealing to think about what would it be like to just write without overthinking everything? I think I just got really curious about the unconscious, and especially its role in creativity. Like a lot of writers, I've often felt when I'm writing that I'm in contact with something larger than my conscious mind, that I'm being led somewhere by the piece, or that the piece that I'm writing is somehow more intelligent than I am. And you hear artists talk about this all the time. They feel like they're not really creating so much as they're uncovering something that-

- [Steve] They've sort of become a conduit to some larger consciousness maybe.

- [Megan] I think ultimately what I was looking for was some sort of external meaning or guidance or some sort of, I don't know. I wanted some sort of guidance on, on questions in my life.

- [Steve] But let me ask you about that, because there's something really interesting about what you're saying, given that you're searching for this meaning outside yourself, because you have a rather unusual background for someone who's known mainly as a writer about technology and AI. Not only do you come out of a fiction-writing background, which I don't think you do anymore, but I could be wrong. You also grew up in a very religious family. You grew up as a Christian fundamentalist, right?

- [Megan] Yeah. Yeah. My parents were evangelical Christians and actually everybody I knew growing up believed what we did. Basically, my whole extended family are born again Christians. I was homeschooled along with all my siblings growing up. So most of our social life revolved around church. Yeah. When I was 18, I went to Moody Bible Institute in Chicago to study theology and was planning to go into full-time ministry.

- [Steve] So there definitely was meaning out there in the world for you, I assume that whole time-

- [Megan] Absolutely.

- [Steve] I mean, I thought the whole point was everything was infused with meaning and God.

- [Megan] Yes. Our lives had a definite purpose. Our purpose as human beings on this earth was very clear.

- [Steve] But you left the faith, right?

- [Megan] Yeah, I had a faith crisis when I was two years into Bible school. I mean, I had been having doubts for a while about the validity of the Bible and the Christian God. I dropped outta Bible school after two years and pretty much left the faith. I think I began identifying as agnostic almost right away.

- [Steve] Is that how you identify now?

- [Megan] Yeah. Confused, I guess. I dunno. Agnostic is probably the best term. Yeah.

- [Steve] One thing that's so fascinating is, so you lost sort of that very hardcore Christian faith, but my sense is you're still extremely interested in questions of transcendence, the spiritual life. That stuff matters to you.

- [Megan] Yeah, absolutely. Yeah. A lot of people do leave that world, but I don't think anyone who grew up in it ever totally leaves it behind. And my interest in technology, I think, grew out of a lot of those larger questions about, yeah, what does it mean to be human? What does it mean to have a soul? You know, all these things that were very certain when I was growing up.

- [Steve] A few years after she left Bible school, Megan read Ray Kurzweil's "The Age of Spiritual Machines", the book that gave transhumanism a kind of cultural buzz. It was a utopian vision of the future where we would download our consciousness into machines and evolve into a new species.

- [Megan] And it was this incredible vision of transcendence. This idea that we were going to enlarge our intelligence, our physical capacities. We were going to essentially become immortal and be able to live forever.

- [Steve] So there's some similarities to your Christian upbringing.

- [Megan] Yeah, as somebody who was just at the age of what, 25 starting to accept that I wasn't going to live forever in heaven, that I wasn't going to have this glorious existence after death. It was incredibly appealing to think that maybe science and technology could bring about a similar transformation.

- [Steve] Megan threw herself into this transhumanist world. But once again, she eventually grew skeptical of this utopian vision.

- [Megan] But it did lead me to a larger interest in technology. And I think, through reading a lot of those scenarios, particularly mind uploading, I started thinking about what it means to be a self, or to be a thinking mind. But there was this question that was always elided, which is, well, is there going to be some sort of first-person experience there? Right? You know, nobody had a good answer for that, because nobody knows what consciousness is. And that to me was really the fundamental problem, and what got me really interested in AI. Because, I mean, that's the area in which we're playing out that question.

- [Steve] Isn't the assumption that AI has no consciousness, has no first person experience? Isn't that the fundamental difference between artificial intelligence and the human mind?

- [Megan] It's definitely the consensus, but how can you prove it? And now we have chatbots that, I think just actually this week, Anthropic released a new chatbot that's claiming to be sentient, that it's conscious and it has feelings and emotions and thoughts.

- [Steve] They just say that. But why should we believe that if the chatbot says that?

- [Megan] It's different; we don't know how it's different, really. We really dunno what's happening inside these models 'cause they're black-box models; they're neural networks that have many, many hidden layers. So the relationships that they're developing between words, between concepts, the patterns that they're latching onto, are completely opaque, even to the people designing them. So it is, it's a kind of alchemy.

- [Steve] So let's get a little more concrete here and talk about the kind of AI models that we hear a lot about, like ChatGPT, these extremely sophisticated large language models that seem really intelligent. But I mean, the way these models work, correct me if I'm wrong on this, is it's just algorithms about language. You're sorting through these massive databases and you're constructing words that make a lot of sense. But is that all it is? Is it just algorithmic wordplay?

- [Megan] Yeah, Emily Bender and some researchers at Google came up with the term stochastic parrots. Stochastic means statistical, relying on probabilities and a certain amount of randomness, and then parrots because they're mimicking human speech; they're able to essentially just predict what the next word is going to be in a certain context. That, to me, feels very different than how humans use language, which usually involves intent. We typically use language when we're trying to create meaning with other people. It's sort of an intersubjective process.

- [Steve] So in that interpretation, the human mind, the thinking mind is fundamentally different than AI, correct?

- [Megan] I think it is. I mean, Sam Altman, the CEO of OpenAI, famously tweeted, I'm a stochastic parrot and so are you. You know, so there are people, the very people who are creating this technology, who believe that there's really no difference between how these models are using language and how humans use language.

- [Steve] And if you really take that idea seriously, that there's no fundamental difference between the human mind and artificial intelligence, or if AI will generate some entirely new kind of intelligence, well, who knows what's ahead for us? Do you think that an AI so advanced would seem to have God-like capacities? Coming back to our question about transcendence and the future possibilities of machines, will they become so sophisticated that we almost can't distinguish between that and more conventional religious ideas of God?

- [Megan] I mean, that's certainly the goal for a lot of people developing the technology. You know.

- [Steve] Really?

- [Megan] Oh yeah. Sam Altman, Elon Musk, they've all sort of absorbed the Kurzweil idea of the singularity. They are trying to create a God, essentially. That's what AGI, artificial general intelligence, is. It's essentially AI that can surpass human intelligence.

- [Steve] Which is, surpassing human intelligence is different than God, I think. Maybe it's not, I don't know.

- [Megan] I mean, the thinking is that once it gets to a level of human intelligence, it can start modifying and improving itself. And at that point it becomes a recursive process where there is going to be some sort of intelligence explosion. This is the belief. Yeah, I think that's another thing that is a question of what we are trying to design. You know, if you want to create a tool that helps people solve cancer or come up with solutions to climate change, you can do that with a very narrowly trained AI. But the fact that we are working right now toward artificial general intelligence, that's something different. That's creating something that is going to, yeah, essentially be like a God.

- [Steve] Why do you think Elon Musk and Sam Altman want to create that?

- [Megan] I think they read a lot of sci-fi as kids. I mean, I don't know. Obviously there's economic incentives and profit motives and all of that, but I do feel like it's something deeper. I do feel like people are trying to look for or create some sort of system that is going to give answers that are difficult to come by through ordinary human thought.

- [Steve] Do you think that's an illusion? If it's smart enough, if it reaches this singularity that it can kind of solve the problems that we imperfect humans cannot?

- [Megan] I don't think so, because I mean, I think it's similar to what I was looking for in dream analysis or automatic writing, which is this source of meaning that doesn't involve thought, or a source of meaning that's external to my experience. And life is infinitely complex and every situation is different. And that requires this constant process of meaning making, thinking. You know, Hannah Arendt talks about us thinking and then thinking again. You're constantly making and remaking thought as you experience the world. And machines are, you know, rigid. They're trained on the whole corpus of our human history, right? And so they're reflecting back to us. They're like a mirror. They're reflecting back to us a lot of our own beliefs. But I don't think that they can give us that sense of authority or meaning that we're looking for as humans. I think that's something that we ultimately have to create for ourselves.

- [Anne] That's Meghan O'Gieblyn, talking with Steve Paulson at her home in Madison, Wisconsin. Her most recent book is called "God, Human, Animal, Machine." By the way, you might like to know the music in this segment is the kind of thing we might be hearing more of in the future: a live improvisation between a human pianist, David Dolan, and an AI system that can listen and respond musically in real time. The AI system was designed and programmed by composer Oded Ben-Tal, recorded last August at Kingston University in London. Coming up, how one painter is making art with AI. I'm Anne Strainchamps. It's "To the Best of our Knowledge" from Wisconsin Public Radio and PRX. AI can do a lot of things, so Charles Monroe-Kane wanted to know if it could help us produce this radio show.

- [Charles] We're gonna produce a show. You're part of a whole hour program in which we're asking the question, does AI dream?

- [Charles] Gosh. So I'm like, hey, before I get started, what if I asked chatGPT who I should have on the show, and have chatGPT write the questions? I put in the question, who's the most important person who could be on this show? And of course, eight seconds later, how quick it is, it was you.

- [Sougwen] Really?

- [Charles] Congratulations, I guess. Why do you think chatGPT chose you in a show called Does AI Dream?

- [Sougwen] Well, I'm flattered. Why would the system think that I would be first in line for that question? Well, I think I've been working in the space of human and machine collaboration for almost 10 years now.

- [AI Voice] Generation one Doug can move, see and follow. In forthcoming generations, Doug will be able to remember, recall, and reflect. When that happens, I have no idea what he'll draw like, but I'm pretty curious to find out. Thanks. Thanks.

- [Sougwen] Maybe that's part of why I've sort of percolated up in the zeitgeist.

- [Charles] I think one of the reasons it chose you, and it's another word it used, was you have empathy. Empathy for AI. And I wonder if that's why it chose you, knowing that maybe you're trying to actually understand what it dreams.

- [Sougwen] Yeah, that's cool.

- [Anne] What does it mean to have empathy for artificial intelligence? For Sougwen Chung, it means treating robots and AI systems as collaborators, less like tools or super smart paintbrushes and more like fellow artists. Chung is a former researcher at MIT's Media Lab, and she's a well-known artist who trains robots to paint with her using AI. So if you're watching, you see her make a brush stroke with black oil paint and you see the robot mimic her. But then, and here's the thing, at some point Chung stops painting and the robot continues, and it makes something new: big, abstract, flowing lines, organic shapes. It's beautiful, and it really is a co-creation, because the whole time Chung is wearing an EEG headset. She's using her brainwaves to communicate with the robots, often while she's in deep meditation and in front of audiences. So Charles wondered just how close that connection feels.

- [Sougwen] No, I like it. I think there's something about the work that's meant to open up dialogue, for people to participate and think about what's actually going on. What are the dynamics at play? I've been building robotic systems driven by a variety of techniques for a while, and with each generation I learned something new. I don't think I like to stay fixed in one particular medium or technique.

- [Charles] Yeah. So you're pushing the boundaries. And I gotta ask you why with art, I love art and appreciate art, but-

- [Sougwen] As do I.

- [Charles] Yeah, yes you do. But maybe you could have explored this as a professor of robotics. Do you have to do art? Is that the way to explore this?

- [Sougwen] Oh, I definitely don't think there's a hierarchy of approaches. It's how I think. I think there's something about art that asks questions and doesn't try to find easy answers for things. When I was a researcher at the Media Lab and started diving into the space of building machines and building my own data sets, I found that a lot of the conditions of being solely a technologist and an engineer are about executing toward a single function. And that was less interesting to me than trying to break things and use a system in a way that it shouldn't be used, to see what I can learn, as opposed to building these, you know, perfect features. I like the error states and I find a lot of value in them. And I think in general, that kind of creative expression, living with the system and the work, has felt very real to me. And I think that's important.

- [Charles] No, no, that makes sense. You can do things with machines, with AI, that humans can't do alone, which must be very exciting for you as an artist. Do you have any story that comes to your mind where, like, this is something I did with a machine, there's no way I could have done this as a human alone?

- [Sougwen] Oh yeah, absolutely. A few years ago I built a multi-robotic system connected to the flow of New York City. You know, I don't have the sensory apparatus to be able to see so many different positions and views of an urban landscape. That's just impossible, and so is extracting the movement data from it. We used an optical flow algorithm to extract data points to power this robotic swarm, if you will. And that's fundamentally something beyond my physical, visual and embodied capabilities. It was really exciting and new.

- [Charles] What did the end product look like?

- [Sougwen] It was a three meter by three meter painting that I performed at Mana Contemporary in New Jersey. By the end of it, the robots and I were covered in paint on a large canvas. So it looked like probably the strangest landscape painting you've ever seen. But it was a way to view those layered ideas in a kind of chaotic way, I guess.

- [Charles] When I went to, I didn't know who you were until chatGPT told me about you and I went to the art thinking I was gonna see this modern-

- [Sougwen] It should get a commission.

- [Charles] I should, yeah I should. I thought it was gonna be modern. I had all these ideas of what it was gonna be. None of those ideas held up when I got to the art. It's beautiful.

- [Sougwen] Thank you.

- [Charles] You have, like, maybe a headset on, and these arms are moving and you're moving with it. The end product of these things isn't this postmodern chaotic thing. It's beautiful. Is that a goal? Is beauty a goal?

- [Sougwen] Yeah. No, that's interesting. I think maybe in a way beauty is the goal. But more so than beauty, I like this idea of escaping my own frame of mind, almost like being in a state of flux and a flow state that puts my own conscious mind out of the equation. I guess because I do many of these paintings as a performance, it's a lot about trying to navigate that state of attention and presence from other people on the canvas. How that happens is I try to create movements and a balance with the machine system that grounds me and calms me down. And I think if that output resonates with people in the way that they describe it as beautiful, that's really powerful. It's not something I'm trying to do at all, but I think it might be a product of my presence with the system, potentially.

- [Charles] Well, I had mentioned earlier that chatGPT picked you as a guest. I said, okay, chatGPT, could you write some questions? Here's the first question that it asked: You embrace imperfection. Would you be disappointed if you and the machine made something perfect?

- [Sougwen] You know, I think our idea of perfection is really linked to control, and I personally am not that interested in control. I don't think that's where we find, that's not where I find moments of inspiration. One would say like, if the result of the work is quote, unquote "perfect", then I think it's predictable. I think it's expected, it means we had an idea in our head of what it was, and then it resulted on the canvas. And that's not, as a painter, something that I get out of bed for. If I already have it in my head, then, then it kind of exists already. What I like about the, the imperfection of the process is there's real tension, there's real way finding involved.

- [Charles] You know, there's this idea for this show we're very interested in, and that is hallucinations. The idea that when a machine, or AI specifically, does something that's unexpected and doesn't repeat it again, it's called a hallucination.

- [Sougwen] Yeah.

- [Charles] What do you think of those moments? They seem important to me.

- [Sougwen] Yeah, I think what's really interesting about where we are with these systems is this idea of machine translation and synthesis. When you work with large data sets, like the ones that drive chatGPT or Dall-E or Midjourney or any of these systems, this idea that it's hallucinating is really powerful. I think it comes from the human drive to anthropomorphize things, which is really exciting. I wonder if that's unique to our species, the dream of seeing ourselves in other things, whether that's like our pets or our microwaves, or our machines or our cars. I think that mirroring and that echo is really, really fascinating. I would argue that what these systems allow us to do is sort of hallucinate together in an interesting way. Bringing new images and new ideas out into the world to create a more vivid imagination about what things could be is kind of exciting. So if that's the outcome of a hallucination, then I'm here for it.

- [Charles] You know, for a lot of us, if we wanna find awe and wonder, we go to nature. I wonder if you've experienced awe and wonder with the machine.

- [Sougwen] Oh, I will confidently say the practice is kind of an ongoing exploration of awe. From the first moment I worked with Doug one in 2015, there was something different happening on the page. In the moment of being there, what the relational dynamic of it requires is this commitment to the experience of awe and concentration and mark making that I don't think you can replicate with anything else. At least in my years of developing the work, it's always a very singular, addictive space in a way, maybe, but there's a thread of awe that runs through the whole practice.

- [Charles] I had asked chatGPT who should be on the show, and chatGPT gave me three people. You were one of them. Another one that really surprised me was Juliane Kaminski. She's a psychologist who studies the communication and social cognition of dogs. And then I'm like-

- [Sougwen] So cool.

- [Charles] Yeah, yeah, let's talk about dogs. If there's consciousness in the machine, we have to understand how we interact with dogs. And I'm like...

- [Sougwen] I love that idea because part of what I've really come to with the work is this idea of decentering the human.

- [Charles] Yep.

- [Sougwen] Right? Decentering us. We're always the main character, but we're not. We've seen what happens in the world when human beings regard themselves as the main character. Right? Well, I don't need to talk about climate change on this podcast. I don't think we have enough time. But I think there's something about opening up our view to other ways of thinking, species or otherwise that reframes our position in the world and in our lives and to be more relational in that engagement. And I think that's, that's really cool and important.

- [Charles] I wanna ask another chatGPT question. It asked, do you believe that machines could one day become autonomous creators? Or do you believe that they will always require some level of human input or collaboration?

- [Sougwen] I think that, given that these systems and machines are built by human beings, as far as I know, they always become an extension and a creative expression of human intent. So I think in some ways it's always some additional apparatus for human creative expression, and they will always be inextricably intertwined in a way.

- [Charles] Is there a moment coming with you or in the future where you come downstairs to the studio and the machine is making art on its own?

- [Sougwen] Would it be funny if I said that I unplug all the robots before I go to bed? Well, it'd be funny if I did.

- [Charles] Most people, and I mean in the world, but certainly in America, they're afraid of AI. I mean, they're really afraid of it. From Blade Runner to books, to how it's communicated to us, to how Congress talks about it. Why are we afraid? What do you think we're afraid of?

- [Sougwen] Okay, so if we think about machines as human extension in a lot of ways, or manifestations of human intent or extensions of ourselves, there are some dark aspects of humanity that can be extended through machine apparatuses. I think that drives a lot of how we construct this idea of the AI. But at the same time, there could just as well be machines of care and machines of stewardship and machines that steward nature, that we don't see as much because we haven't built them yet.

- [Anne] Sougwen Chung is an artist and researcher. She's a former MIT Media Lab fellow and Google artist in residence, and she was recently named one of Time Magazine's 100 most influential people in AI. You can see a video of her collaboration with AI robots on our website. "To the Best of Our Knowledge" is produced in Madison, Wisconsin by Shannon Henry Kleiber, Charles Monroe-Kane, Mark Riechers and Angelo Bautista. Our technical director and sound designer is Joe Hardtke, with help from Sarah Hopefl. Additional music this week comes from Mystery Mammal, Bio Unit, Pan Eye, the Lovely Moon, Young Paint, David Dolan and Oded Ben-Tal. Our executive producer is Steve Paulson, and I'm Anne Strainchamps. Thanks for listening.

- [Electronic Voice] PRX.

Last modified: 
April 01, 2024