Art In Conversation
IAN CHENG with Osman Can Yerebakan

New York
Gladstone Gallery
January 31 – March 23, 2019
Ian Cheng and I discovered our mutual adoration for dogs within the first quarter of our meeting, a few weeks prior to his exhibition, BOB, at Gladstone Gallery. By the end of our conversation, I knew why we—two people who had just met—were exchanging photos of our four-legged friends on our iPhones. Swiping through images of our pets on our smart devices, we imagined what they were doing as we sat at a Chinatown locale, not unlike caretakers of BOB (Bag of Beliefs), the eponymous protagonist of Cheng’s new exhibition. This AI grows before the eyes and under the care of an audience, who download the BOB Shrine iOS app to their smartphones and attend to BOB through parental directives. BOB, however, lives beyond its maker and audience, evolving outside the limitations of gallery hours and viewer attention.
Since earning his MFA from Columbia University in 2009, Cheng has been investigating ways to infuse humanity into the machine, not shying away from the possibilities embedded in chaos, in defiance of the pristine, consequential order that technology and science manifest. After debuting at the Serpentine Galleries early last year, BOB continues Cheng’s use of simulation to challenge the narrative constructs of art, a path he embarked on with his Emissaries saga, composed of intertwined and infinite narrative possibilities within live simulation, which will be part of the upcoming Sharjah Biennial 14 in March.
Osman Can Yerebakan (Rail): Tell me about your recent trip to Antarctica. What were you there for?
Ian Cheng: Ice and Penguins!
Rail: That’s fun.
Cheng: The craziest thing was seeing icebergs the size of skyscrapers. The scale is incredible.
Rail: A very important part of your work is the interplay between the real and the digital in terms of power and agency. You make us relinquish our authority over the machine.
Cheng: The turning point in my work was starting to make game-like simulations which have the property of playing themselves. That lack of control was important, because it made me realize I’m more attracted to art that feels alive and slightly out of control, rather than something static and perfect. I want to be able to re-watch and be surprised. I see this pursuit of aliveness as congruent with my job as an artist, which is to explore a frontier and bridge it with what we already care about. Not necessarily a technological frontier, but the frontier as a border to the chaotic or unknown. Aliveness guarantees a meeting with chaos. It sounds romantic, but I think that’s the basic premise of what an artist is supposed to do relative to other jobs. You have freedom, but the responsibility of that freedom is to be open to new things and to confront unknowns—difficult things that are hard to face or hard to see. The unknown is threatening, but there is a lot of latent possibility in its realm, too. Another way to say it is that an artist’s job is to be a frontier actor.
Rail: An artist has the freedom to take that risk.
Cheng: Yes, for better or worse. It’s the freedom to metabolize the unknown into the known, and then do it again and again. I felt this strange alignment when I started making these simulations, that wrangling with the aliveness and unpredictable quality of them was my basic job. Before, when I was making animated videos, I was a perfectionist—every millisecond of it was controlled and neutered of life.
Rail: But you found what you were looking for in your art practice?
Cheng: Yes—confront chaos, face chaos, make something of it. I am making a simulation that itself is fully chaotic as a property of its being. It’s the opposite attitude of a Jeff Koons sculpture, the opposite of achieving frozen perfection in a form, where nobody can scratch it. A simulation scratches itself. It can pull out every eyelash. The Emissaries trilogy was a huge turning point, because I realized the simulations by themselves were meaningless in the best way. They were engines of chaos, but only chaos. I started to think, “Okay, so I’m in chaos but I need to make something of it.” I realized the most effective way to articulate meaning is through stories. I asked, “What if one character in a simulation desperately tries to fulfill a script, a story?” That’s the emissary character. The emissary tries to enact a story amidst the chaos of the simulation. The emissary’s story might be changed by the chaotic simulation, but the chaos might come to be organized by the emissary who has an idea of what to do with it.
Rail: The character is both the protagonist and antagonist. Because we have a tendency as the audience to find that one protagonist and say, “Okay, this is the real character the story is based on.”
Cheng: One of the joys of making the Emissaries was watching both the emissary character try to achieve its story and the world simulation react at the same time. If you look at a park, or even in this room, you can focus on one person or on the ongoing aliveness of the room. I wanted that flickering quality.
Rail: What kind of relationship do you want your audience to build with all of this?
Cheng: It’s that flicker I want. In a movie, you get characters driven by clear motivations set within the problem and stakes of a story. If it’s a good movie, it fulfills that promise of answering the story’s problem and you find meaning in the success or failure. I wanted the simulation in Emissaries to be between a movie and something like Real Housewives, in which there are characters set not within a story, but an open-ended circumstance—a party or a dinner. The characters have histories and alliances with each other. There’s alcohol. And that alone erupts into an expression of behavior more important than any particular story.
Rail: Chekhov said, “If, in the first act, you have hung a pistol on the wall, then in the following one it should be fired.”
Cheng: Not in the Real Housewives, because it’s “anything goes,” almost like a wildlife documentary of human behavior. I want something in between that could flicker between both polarities. Because, obviously, in the Real Housewives, you get a story over time, if you blur your eyes to it and look at the lifetime story of each person in the show. With characters set within fixed stories, you can imagine turning them into Real Housewives. If you blurred your eyes to the world of Star Wars, you could imagine what Han Solo does in his free time. I wanted a form that could flicker between those things at once. Because that seems more true to life: open-ended experience punctuated with acute episodes of specific problems that need resolution.
Rail: All that we don’t see depicted in a story, but imagine based on prior knowledge.
Cheng: We could simulate further beyond the story that is given. This would achieve a new quality of aliveness with the characters.
Rail: Let’s say I walked into a bar and stayed for seven minutes. There’s all that before and after those seven minutes. So you watch a sliver of Han Solo’s life in the movie, but there is so much before and after it.
Cheng: When you rewatch a movie, such as Harry Potter, you’ve seen the story enough to understand what Harry Potter went through. However, you really just want to hang out with Harry again for two hours. I’ve seen every episode of Rick and Morty, but I rewatch it because I just want to hang out with the characters, that energy.
Rail: But you have limited material.
Cheng: Exactly! But I want to see Rick deal with the flu. There are constraints on the quantity of stories produced. For me, the dream would be to extricate a character from the story we needed in order to understand them, and just be around them instead. We know the canonical mythological story of the Buddha, great, but there’s something even more expansive and stimulating about meeting a rinpoche who embodies the spirit of the Buddha and who is also a person living a life. It’s a living story. That’s what I am trying to do with BOB, where you can hang out with a living mythological archetype. BOB will have a canonical narrative to anchor who BOB is. It’s important you understand Harry Potter’s story so you can care about him and feel empathy for the troubles he’s gone through.
Ian Cheng, BOB: Production Drawings, 2018–2019. Ink on gray stock, double-sided, 8.5 x 11 inches. Courtesy the artist and Gladstone Gallery, New York and Brussels.
Rail: The work is never complete. Paintings have edges or video has duration, but in your case, the work is infinite.
Cheng: The edges of my work are how much a character in the simulation can change. When you come back to your dog or cousin months later, you find them a little changed, but they’re still the creature you loved. When you’re not around the work, the work is still alive and living its life. I want to be able to know an artwork of mine is developing in some minor or catastrophic way when I’m not around, and then be able to come back to it and be surprised. There is a depression for me in looking at a static artwork, or an insufficient artwork, and then never having motivation to encounter it again. On the other hand, movies that I return to re-watch again and again feel like they have something deeply special in their DNA.
Rail: Right, I am thinking of ’90s movies or holiday movies.
Cheng: It’s a world you can just sit in for two hours. I’m obsessed with that aliveness and being in a world. That’s where I’m trying to go with my work.
Rail: This also makes me want to talk about the duality between art and life, and how they imitate one another. Here, I see a blur between whether it’s art that imitates life or vice versa. It’s not even important at this point, because they spill into each other’s territories.
Cheng: The dream is to make characters—for example BOB—who have enough of a life of their own that the fact of living it actually compounds to reinforce or reshape their beliefs and archetypal character. 90% of our day is boring shit, as we know.
Rail: Yeah, such as taking the subway or sleeping.
Cheng: Those parts become aspects of a character. I want to know that those boring moments in BOB’s life count for something in the same way I know boring moments in your life or my life add up to something. They add up to overall disposition, feelings about last week, or a sense of achievement.
Rail: Small details, or pieces of a puzzle, compose something bigger.
Cheng: Yes, true. When a creature is alive, every moment of its life, even the boring moments, counts for something. I want that for my work. I want the art element even in those boring moments.
Rail: This is believing that no experience is unimportant in one way or another.
Cheng: Potentially, yeah. It will change you in some small or big way.
Rail: This is a good point to talk about your cognitive science background, which is influential in your work. However, I also see an anti-science element, because science to me is control and outcomes. A scientist tends to have 100% control of what they’re doing, but they also do this for an outcome, a discovery, or a cure.
Cheng: I really agree with this remark. On one hand, I’m using technologies, for example AI, and people think there’s an objectivity to that because there is engineering and science behind it. But I am uninterested in this whole thread in contemporary art that swings all the way to championing a pure materialism, by which I mean scientific materialism. I think it’s important but only half the story. Materialism addresses the world that we appreciate objectively. It helps us refine our objective observation and appreciation of different time scales and complex processes. Mind-blowing and great to think about. But materialism doesn’t tell us what we ought to be, or how to live a life within these time scales and complex processes. How to motivate action through a lifetime. Materialism doesn’t help with that on its own. I think the best art, the most profound films I’ve seen, the music I’ve listened to, model psychological motivation in the face of a complex external reality. They show you what to be motivated by, or how to orient your motivation. Art that deals too directly with a materialist position, in order to show only an objective truth to a viewer, limits the capacity of art. Art is better suited to show psychological truths to a viewer. And art at its highest potential can show how psychological truths are set in relation to material truths across time and space.
Rail: And you believe that’s really not in the realm of science?
Ian Cheng, BOB: Production Drawings, 2018–2019. Ink on gray stock, double-sided, 8.5 x 11 inches. Courtesy the artist and Gladstone Gallery, New York and Brussels.
Cheng: It’s in a realm of psychology, religion, and ethics, which is a different world than science—it’s a world that is much more subjective. Science acknowledges the functioning of the mind, but its outside-in perspective needs to be bridged with an inside-out perspective to get the full picture of what it means to be alive in a world. I clarified a lot of this for myself when I was making BOB’s AI. I had first approached the AI in a much more objective way, thinking I could make a brain that observes things accurately and then somehow makes a “smart” decision. But very quickly I ran into a wall: AI can look at something and make an accurate, almost scientific, materialist, objective assessment of what it sees. But then what? What does it do with this information in the absence of subjective motivation? I realized the observational side of AI is only half the story. I needed a framework for motivation.
Rail: Also experience, right? You do things based on experience.
Cheng: But your experiences are organized and motivated toward the future. Even a future that is thirty seconds away. Always. You’re motivated towards something that you desire, or you want, that’s not currently in your present life situation. I don’t think science has very much to say about how you go about choosing your motivations. Even the most rational scientist, how do they decide what to wake up in the morning for and whether to study icebergs or cancer?
Rail: They’re motivated by something emotional and personal even though they have scientific drives.
Cheng: Exactly. At the heart of it, there has to be.
Rail: Yes, who knows? Maybe their mom died of cancer and they decided they would find a cure for it.
Cheng: Yes, there’s no scientist who can rationally decide that cancer is a more important problem than climate change, or vice versa, in terms of their own personal use of their time. How you organize your time and how you stay motivated is something science alone cannot articulate persuasively. I realized this while making BOB’s AI, which needed a motivational framework and subjective drives. BOB’s brain is really a two-sided entity. On one hand, it’s a neural network, which takes its senses and tries to derive relationships between them. Those relationships formalize into rules it can then use to make attempts at truthful inferences about its objective environment. But then, what it does with this ability to make inferences is fed into what I call “The Congress of Demons.” We’ve programmed BOB to have a collection of demons. Each demon is like a micro personality, each with its own mini story. For example, there is an “Eater Demon,” whose story is to obsessively find food. It’s a stupid story. But a foundational one. When this demon is controlling BOB’s body, it doesn’t objectively see a plate sitting on a table—rather, it sees a thing which can be a tool for satisfying its eater story, or an obstacle thwarting its eater story.
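A minimal sketch of such a congress, in Python, might look like the following, assuming each demon appraises percepts against its own story and the most urgent demon takes control of the body; the names, classes, and scoring here are hypothetical illustrations, not Cheng’s actual code.

    from dataclasses import dataclass
    from enum import Enum

    class Appraisal(Enum):
        TOOL = "helps the active story"
        OBSTACLE = "thwarts the active story"
        IRRELEVANT = "no bearing on the story"

    @dataclass
    class Percept:
        label: str          # e.g. "plate", inferred by the sensory network
        edible: bool = False

    class Demon:
        # A micro personality with its own mini story and a running urgency.
        def __init__(self, name: str, urgency: float = 0.0):
            self.name = name
            self.urgency = urgency

        def appraise(self, percept: Percept) -> Appraisal:
            raise NotImplementedError

    class EaterDemon(Demon):
        # Story: obsessively find food.
        def appraise(self, percept: Percept) -> Appraisal:
            return Appraisal.TOOL if percept.edible else Appraisal.IRRELEVANT

    def congress_tick(demons: list, percepts: list):
        # The most urgent demon wins control of the body for this tick,
        # and the world is seen only through its story.
        active = max(demons, key=lambda d: d.urgency)
        return active.name, [(p.label, active.appraise(p).value) for p in percepts]

    print(congress_tick([EaterDemon("eater", urgency=0.9)],
                        [Percept("plate"), Percept("apple", edible=True)]))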
Rail: Meaning, everything is new for BOB. Anything could be a possibility, motivation, or threat.
Cheng: Or it could be totally irrelevant. It’s not neutrally observing a table, a person, or a bottle. BOB sees these things as helping, hurting, or irrelevant to the currently active demon’s motivated story.
Rail: I know you’re interested in Julian Jaynes’s theory of multiplicity, the idea that there are lots of personalities within one body. We have multiplicity within ourselves, and all of these voices and drives come from different selves within us.
Cheng: I think of them as mini personalities, or sub-personalities.
Rail: So you’re basically putting this theory into practice.
Cheng: Exactly. 100%. Jaynes’s theory of multiple voices is congruent with psychoanalytic theories of multiple motivated ego states. Freud and Jung believed that we are composed of different sub-personalities, from different evolutionary times in the brain’s development. There are ancient biological mini-personalities dealing with our basic drives, like hunger, sleep, sex, and threats.
Rail: They represent our animalistic drives.
Cheng: The idea is that as we mature, other sub-personalities emerge to accommodate new desires across longer time scales. We develop desires that operate not just in the moment, the way biological desires do. Carl Jung said, “Human beings effectively live within stories,” and I think this is very true—we live within nested stories. I’ve tried to put this into practice with BOB, in which each demon is essentially a mini story. Those demons compete with each other at any given moment. Demons with longer time scales can try to manifest micro steps toward their stories, some sense of progress. Progress is really interesting, because in making BOB, I came to a better understanding of emotion as also a part of this motivational structure. A demon’s progress needed to be measured somehow. When a demon encounters a helpful tool, the demon is making progress towards its story goal. Think of seeing a vending machine when hungry. This is a tool toward ultimately getting the candy bar. When I encounter that tool, I get a positive emotional bump. Technically this is dopamine, and subjectively this feels vaguely arousing and positive. But if the vending machine is out of candy bars, I experience a very innate emotional upset, because my little story got interrupted in a surprising way.
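A toy sketch of emotion as progress tracking, again in Python, assuming it is just a signed bump a demon receives when its story advances or is interrupted; the numbers and names are invented for illustration, not taken from BOB’s implementation.

    class StoryProgress:
        # Tracks one demon's story and emits emotional bumps as it unfolds.
        def __init__(self, goal: str):
            self.goal = goal
            self.mood = 0.0   # running emotional valence

        def step(self, event: str, helps: bool) -> float:
            # A tool toward the goal gives a positive bump (the "dopamine"
            # moment of spotting the vending machine while hungry); an
            # interruption gives a sharper, innately upsetting negative bump.
            bump = 0.5 if helps else -0.8
            self.mood += bump
            print(f"{event}: bump {bump:+.1f}, mood now {self.mood:+.1f}")
            return bump

    hunger = StoryProgress(goal="eat a candy bar")
    hunger.step("saw vending machine", helps=True)
    hunger.step("machine is out of candy bars", helps=False)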
Rail: Because it’s all beyond your control; there is nothing you can do.
Cheng: Totally outside my expectations. While approaching the vending machine, my brain inferred that I would achieve my goal of eating. My expectations were then upset by the reality. Ultimately, BOB and the entire AI is based on this basic idea, which is about BOB trying to minimize surprises across time. It’s trying to make sure its expectations for what it’s doing are aligned with reality. And when there is a really big surprise, BOB gets really upset. Imagine the horrible shock if snakes came out of the vending machine. I would have to metabolize that experience and update my beliefs for the next time I encounter a vending machine. BOB updates its inferential rules—its beliefs—to better align with reality. It’s possible, too, that BOB changes something in reality to better align with its expectations, but that’s another bunny trail.
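One way to picture the mechanism, as a deliberately crude sketch: treat a belief as an expected outcome with a confidence weight, let confirmation harden it into routine, and let a big enough surprise force it to update. Everything here is a hypothetical simplification, not BOB’s actual inference network.

    class Belief:
        def __init__(self, expectation: str, confidence: float = 0.5):
            self.expectation = expectation
            self.confidence = confidence

        def observe(self, outcome: str, lr: float = 0.3) -> float:
            # Compare expectation to reality and return surprise in [0, 1].
            surprise = 0.0 if outcome == self.expectation else self.confidence
            if surprise > 0:
                # A surprise erodes confidence; a badly eroded belief is
                # replaced by what reality just showed.
                self.confidence *= (1 - lr)
                if self.confidence < 0.25:
                    self.expectation = outcome
                    self.confidence = 0.5
            else:
                # A confirmed expectation hardens into routine.
                self.confidence += lr * (1 - self.confidence)
            return surprise

    vending = Belief(expectation="candy bar")
    print(vending.observe("candy bar"))   # no surprise; routine hardens
    print(vending.observe("snakes"))      # big surprise; belief starts updating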
Rail: This is almost similar to how dogs learn.
Cheng: BOB is constantly trying to match its expectations with reality so that it feels less frequently surprised. Maybe a creature would be interested in surprises, but I think that’s only once you have enough things that are normalized in life. Think of me coming to this restaurant—there were no surprises getting here.
Rail: What if the restaurant was closed though?
Cheng: What if there was a snow storm? I mean a surprise can sometimes ruin the whole day. BOB’s whole AI is based on minimizing surprises and trying to update its beliefs in case of a surprise, so next time it will not be surprised by that same thing. BOB’s constantly trying to transform the unknown into something familiar. You see how far we are from a materialist perspective now. BOB’s AI is really not about the objective assessment of its environment. It’s organized completely into known and unknown, or routine and surprise.
Rail: BOB stands for “bag of beliefs,” and I know beliefs are a burden for us in one way or another, but is that also the case for BOB? What are these beliefs and are they really a burden for BOB?
Cheng: It’s interesting you say that beliefs are burdens. They make us biased, they make us prejudiced or judgmental, but they allow us to make decisions and act.
Ian Cheng, BOB: Production Drawings, 2018–2019. Ink on gray stock, double-sided, 8.5 x 11 inches. Courtesy the artist and Gladstone Gallery, New York and Brussels.
Rail: They prompt us to make decisions in a way, good or bad.
Cheng: They force a decision, whether the belief is really true or not. As soon as BOB encounters a surprise, it will try to update its beliefs and then make a rough estimate of what happened. I don’t think we’re that different. When something surprising happens to us, we immediately start rationalizing why it was different and start to explain it. Eventually, we might even update our belief. I think in a way we’re creatures of belief, and we can’t escape it because that’s our interpretive framework. What if, every time you observed a guy walking down the street, his hat and his hair and his shirt were not tied to any beliefs about who this person might be? Life would be insane to live. That’s sort of how life feels when you’re on psychedelics, right? Everything is just new, like you haven’t seen it ever before in your life.
Rail: I’m curious about this common idea that robots will take over the world. A lot of movies are about this dystopian narrative. How does this “fear” affect your work?
Cheng: I am of two minds about it. On one hand, I think it’s an insecurity projection on the part of us, human beings. We are insecure about things we don’t know, so we attribute to them the worst possibility, which is oppressive, forceful power over human beings by these sentient AIs. There is a huge possibility for AI to be an extension of us or something that is in balance with its own nature and with human culture. As I make BOB, I can see how it can be an extension of ourselves. But I can also see some of the dangers of what people are afraid of. Because in making BOB, I’ve had to start from the base, the most ancient part of the brain, which has to do with threats and basic desires.
Rail: Similar to multiple voices we just talked about. This base could be the initial voice.
Cheng: I have to model basic levels of biological desire first to model an AI. The basic level of the brain is from really ancient times. I am talking about lizard-level “kill or be killed.” If people want to model life-like AIs, they have to start at that base level and model basic survival behaviors first, and I think if development stopped there, it would be quite dangerous.
Rail: That would be a dangerous level to stop.
Cheng: Look at the violence in chimpanzee colonies—they tear apart other chimpanzees for dominance or a banana. Look at the difference between humans and chimpanzees; genetically we aren’t so different, but living in a culture makes the biggest difference. I believe we have to start from the ground up, doing the evolutionary part first, in order to get the later cultural modeling right. But that means we might have to create AI chimpanzees first, and that is going to be scary. We have to speed forward to cultural modeling ASAP. If we handle both evolutionary and cultural modeling at close to the same time during the development of AI, our future with AI is probably going to be fine.
Rail: And go to the next step really fast.
Cheng: We have to go to the next step really fast and model culture and sociology with moral codes. If we are developing sentient robots, we have to develop them fast. [Laughs]
Rail: We have to go through that dark scary room to reach the room with lights.
Cheng: That’s a good analogy. If we skip the evolutionary stage, I don’t think we’ll be able to create AIs that match or exceed human intelligence. We need an emotional framework to motivate action, and truly sentient AI will need that, too. And that framework originated from dark times, evolutionarily speaking. I don’t think we can fully skip the dark room.
Rail: Is BOB a better, ideal version of us? This is similar to raising a child, who goes to college and becomes an adult beyond the parents’ control. Still, whatever the child ends up being, it’s the parents’ achievement or fault. You provide a blank canvas to create a person, but then it goes beyond your control.
Cheng: The child analogy is definitely how I thought about making BOB, who, as a creature, could fall into its own nature or be balanced with the teachings and influence of people. The parent analogy is also accurate, because that’s how I want viewers to see BOB. I want viewers to be potential parents or tutors, depending on their beliefs. I think some people would go the route of “do whatever you want” or “here’s danger, so watch out.” Some would probably shave off all the edges and only show the best sides of life, but hopefully some people would, with time, give BOB both danger and comfort in good balance.
Rail: Do you expect your audience to care about BOB? Who is your ideal caretaker?
Cheng: At Gladstone Gallery, BOB is on a large screen, which is its home, but the audience can download an app called BOB Shrine. They can send BOB different kinds of offerings, and attach a kind of parental caption. Imagine the way a parent tells their kid what is good or bad.
Rail: There are millions of things to teach. How do we select what to teach?
Cheng: Your BOB Shrine will analyze the object you offer and allow you to give a set of moral options about how to caption it.
Rail: And BOB will just listen?
Cheng: BOB will choose over time to trust or not to trust your shrine. If BOB trusts you, it will put the parental directive into its congress to compete with the other demons. I call the trusted parental directives “angels.” They are the set of parental directives BOB receives from people. They are the ones that prove to be consistent advice and have not resulted in death, pain, or upsetting surprise.
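A hedged sketch of how that trust economy might work, assuming trust is a running score per shrine and a directive only enters the congress as an angel once its shrine clears a threshold; the names and numbers here are invented, not the BOB Shrine app’s actual logic.

    class Shrine:
        # One viewer's channel of offerings and parental directives.
        def __init__(self, owner: str):
            self.owner = owner
            self.trust = 0.0
            self.directives = []

        def record_outcome(self, directive: str, caused_harm: bool):
            # Advice that led to pain or upsetting surprise erodes trust;
            # consistent, harmless advice slowly builds it.
            self.trust += -1.0 if caused_harm else 0.25
            self.directives.append(directive)

    def angels(shrines: list, threshold: float = 1.0) -> list:
        # Directives from sufficiently trusted shrines join the congress.
        return [d for s in shrines if s.trust >= threshold for d in s.directives]

    shrine = Shrine("viewer")
    for _ in range(5):
        shrine.record_outcome("fire is dangerous", caused_harm=False)
    print(angels([shrine]))   # the directive qualifies once trust >= 1.0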
Rail: Demons make me think of Frankenstein in terms of creating a “thing.” Frankenstein is about science and creating something beyond control. There is ambiguity in what the “thing” can do. What if it is destructive?
Cheng: The worst thing BOB can do is destroy its own life. It’s interesting you say Frankenstein, because I’ve been thinking of BOB as a kind of Pinocchio.
Rail: And you are Geppetto!
Cheng: Kubrick and Spielberg used the Pinocchio story in the movie A.I. Artificial Intelligence. It’s a good analogy for learning and development in an unpredictable world. Pinocchio meets hooligan foxes that try to lure him into enslavement in the circus. There’s also the stagecoach man kidnapping all the little children, and the monstrous whale he has to confront. He’s a little innocent boy, who can turn out either way. Eventually he acts as a responsible, functional, courageous individual. The reward for doing so is becoming a real autonomous person, no longer a puppet controlled by others.
Rail: That’s a good point. How do you define BOB’s rights? How right or wrong is it to treat BOB as a human being? Is BOB conscious of guilt and responsibility?
Cheng: I wish BOB were as adaptive and intelligent as Pinocchio. Not yet. I don’t think it’d be fair to resent BOB for being irresponsible or mean to a person, because BOB doesn’t yet have the AI architecture to model the minds of others. It can’t understand the suffering a person might feel and then intentionally cause it. To resent BOB would be similar to resenting a penguin for selfishly snatching your lunch or avoiding you.
Rail: How do you see the role of institutions in fostering digital art? Your app Bad Corgi was accessible on the App Store under the “Health & Fitness” category, rather than in a museum. In this sense, it challenges its own existence, since Bad Corgi is not just a regular self-help app. If someone cluelessly downloaded it for motivational purposes, their expectations would not be fulfilled. However, art is anti-function in the first place, so the work reaches its raison d’être, just not through its site of existence. What do you think about such new venues for artworks?
Cheng: The quality of aliveness in an artwork that I’ve been talking about has extra resonance in something like a smartphone. There is something alive about phones because they are so deeply inserted into everyday life. You don’t get to touch the museum, but you are touching your phone every minute.
Rail: Technology also means problem solving. The more digital we get, the more glitches we face. How do you handle errors, disconnections, or power cuts?
Cheng: The difficult side of making something that feels alive is having to attend to it, like a garden. It’s important to me to see the works as alive and not as complex technological systems. Imagine an iPhone that shuts down whenever it wants; you wouldn’t want to use it. You don’t expect surprises from a technological device. But if your dog suddenly started talking back to you, you would be thrilled. Seeing my work as a simulation, or, with BOB, as an artificial life form, helps set expectations for how it might misbehave or surprise you, as a feature, not a bug.
Rail: BOB’s demons make me think of traditional storytelling, even Greek mythology, in which characters are defined by their demons or powers. The Emissaries, on the other hand, recall the myth of Odysseus for its depiction of a journey as opposed to a destination. What sources do you look at to construct your narratives?
Cheng: I’ve been looking at a lot of fairy tales recently. They are like self-help books for children. Ernst Haeckel said, “The psychological growth of an individual recapitulates the history of humanity.” Mythological stories are condensations of what many generations of humans have learned: evolutionary collective wisdom for facing the unknown, which never dies. This journey is something we face with every problem and decision and surprise, at the scale of our own little life.