Big Board Conversations #4 Intelligence Pt. 2
For one week Charlie Mylie and Jori Sackin had The Big Board at Missouri Bank asking people to make predictions. They greeted customers as they entered, prompted them to make predictions about the future, and then walked them through plotting it on the board. The conversation that follows was sparked by one we had that week. This is the fourth in a series of five that will be published on Ten Millimeters, and the second part on Intelligence. See the first part here.
J: Hey, you're back.
P: So a funny thing happened to me after we had our discussion. I work at the coffee shop and this homeless guy I'd never seen before came in talking to himself and holding this cardboard sign which, before I could say anything, he handed to me. I assumed he was asking for money, but the sign was actually a decimal to fraction conversion chart that he'd scribbled in sharpie. The handwriting was sloppy and some of the numbers looked a little different, but overall the information was good. I didn't know what to do so I just handed it back to him and he left, but it got me thinking. That sign was intelligence, in the sense that it was accurate information, but it existed in a man that didn't seem to operate well in the world.
J: Yeah, that's fascinating. Accurate information can exist in human beings but for whatever reason they may have trouble utilizing it in ways that effectively accomplish their goals. What do you think he was trying to accomplish by giving you his sign?
P: Who knows. Maybe he wanders around all day showing it to people hoping they can do something with it.
J: Yeah. I'm just imagining now...what if someone knew one true thing, such as a mathematical conversion chart, but had no idea as to how it should be used, and so they go around blindly trying it out in every single situation they encounter, hoping that one application will prove effective. Essentially what they lack is the ability to discern what information is applicable to a given problem set, and so they're left to take random stabs in the dark.
P: Did you ever see the movie Rain Man?
J: Yeah. It's been a while though. Remind me.
P: Well, Dustin Hoffman plays an autistic savant who gets taken to Vegas by his brother since he's really good at counting cards. He can keep 7 decks in his head at once but has a really hard time navigating basic human interactions. It just popped into my head because here's a person who's incredibly successful at one task, blackjack, but has a really hard time solving the "lunch problem" that we were talking about earlier.
J: Yeah. So the interesting question is, "What's so special about blackjack?" Well, it's a closed system. There's no possibility for new information to emerge. The rules never change. The cards never change. There is only a rearranging of what already exists. No possibility for emergence or transcendence. That's a very comforting environment for someone who has a highly systemized way of dealing with the world, but of course, most of the world is not that way. It's fluid. Being in a relationship with another human being is a problem set that involves a constant flow of new information that needs to be reinterpreted and evaluated, and so our conception of how things are will necessarily be shaken, and that requires a discerning, flexible adjustment. Savant autistics can store massive amounts of accurate information, but because their system is so over-reliant on information being static, it's terrifying when they come into contact with variables that aren't easily systematized, and so it's no wonder human relationships are hard for them.
P: But isn't all information going through this fluid process of constant reinterpretation?
J: No. 2 + 2 = 4. That's never going to change. I mean, I suppose the rules of Blackjack could change or we could alter a deck of cards somehow so that information could change, but generally, that's going to be a pretty stable system. But of course most information isn't this way. Most information is time and context dependent. It shifts and changes. So for example when the CIA says, "We have some good intelligence on a jihadist plot", we all know what that means. They've collected accurate information that's going to help them plan for the future so they can achieve a desired outcome. If they take this intelligence and use it to navigate the fluid changing nature of reality and achieve their desired result then they've succeeded.
But think about the next day after the plot is foiled. Is the CIA report still “intelligence”? The attack was stopped. Everyone was arrested. What's the value of the information now? How about in a hundred years? How should someone value and categorize this information in the face of a reality that is shifting in ways that may dramatically affect its relevance? I really don't know how to answer that question, but it's one that must be answered if we're going to create artificial systems that tackle these problems more effectively than we currently do.
P: One thing that bothers me as we're talking is this idea of "success" you keep bringing up. It feels like a pretty vague notion. First, who gets to decide whether something is successful or not? Second, I feel like this makes failure especially harsh. And third, I feel like what's implied is, "People who fail are stupid," and that really bothers me because I don't think that's true.
J: So let me start with the last one and then we can work our way back, ok?
J: Are people who fail unintelligent? Well, people fail for a lot of reasons right? A lack of intelligence must be one of them. Think about the classic movie scene of a guy about to defuse a bomb. There's a red and black wire. Which one should he cut? If he knew that the black wire was the correct answer then he would be successful. If he doesn't have that information, he may fail. Either way that's "intelligence" as accurate information. CIA intelligence. Like we were talking about earlier. But you could also imagine a person who doesn't have access to that information but nevertheless solves the problem by placing the bomb in a car, putting a brick on the gas pedal and having the car drive away from everyone before it blows up. That's intelligence as "successfully being in the world". It's a creative solution to the problem at hand. So in order to talk about what is and isn't successful, we first have to define the scope of the problem. In the example I just gave the scope could be "Don't die" or more altruistically, "Don't let anyone get hurt". Depending on the scope, you evaluate whether you succeeded or failed.
P: But originally you said that successfully being in the world was about survival. You said that driverless cars weren't intelligent because they couldn't survive on their own.
J: Right, so the first scope of success I offered was Darwinian, which means that, at an absolute minimum, success is surviving and reproducing in the world. This definition is handy because there's not much interpretive wiggle room. Either you survive or you don't. Either you reproduce or you don't. But it's a pretty limiting frame through which to view things, and so of course "success" can't be contained to just that domain.
So we can also be successful at any of the massive number of other problem sets we've created and can interact with. We set a goal, navigate the world in relation to that goal, and succeed or fail in achieving it. For instance, I went and got coffee this morning. I performed thousands of operations in order to make that goal a reality. I walked down the stairs, opened the door, got in my car, drove around and didn't run into things, spoke my order correctly to the barista, paid the correct amount of money, and sat down without spilling it all over the place. By my own evaluation, I succeeded.
P: So you're saying it's up to the individual to determine whether they succeed or not?
J: Yeah...I mean...there are obviously cases of self-delusion, where someone thinks they are succeeding and their idea of success clashes greatly with that of the majority of the people around them. Serial killers would be an extreme example, I suppose. There is a tension in how success gets attributed that I think is important to recognize, such as social forces being somewhat deterministic in how we frame reality and interpret "success," but generally, I would put most of the responsibility on the individual. For instance, I believe I succeeded in getting coffee this morning, and I'm not sure how someone could argue that I wasn't successful or say that my understanding of my success is somehow culturally relative. I wanted coffee and I got it.
P: Right, but you can imagine someone also succeeding because they were lucky. They didn't know the correct answer, nor were they particularly good at creatively responding to their environment, and so, like a lot of those guys trying to defuse a bomb, they just blindly guess. This would be success, but not because of any intelligence, and the only difference between the two is some internal process we're blind to. So how do we tell the difference between success that's lucky and success that's a demonstration of intelligence?
J: Right, so...luck. I've been thinking about this lately. Off the top of my head I would define it as the swirling causal forces that are out of our control, that shape our environment and affect us in ways that are somehow determined to be undeserved. I say "undeserved" because most people have some kind of theory of karmic justice. For instance, let's say a person who has devoted their life to helping others wins the lottery. People who believe in karmic justice wouldn't see this as lucky. They would see it as repayment for previous actions. Similarly, if I were a serial killer and my house caught fire and I died of smoke inhalation, some people may not see this as unlucky, because it happened in relation to an invisible chain of moral accounting that was rightfully repaid.
You can see this view most clearly in some forms of Buddhism where they lay out a system of perfect moral accountability, where all suffering and joy are perfectly relational to actions taken in the world. In such a system there is no such thing as luck since everything is deserved. Once you start introducing the tiniest bit of luck though, once you start believing that terrible things happen to people that don't deserve it, then you start getting "lucky" and "unlucky" people. I haven't thought about this for very long, but I'm tempted to think of luck simply as "things that happen to people that are considered undeserved". What do you think about that?
P: I don't know why, but as you were talking I was picturing this grizzled old guy with a beard sitting in front of a giant mixing board with this big red knob on it, like a dial with a sharp pointy arrow sticking out so he knew where it was pointing. On one end of the scale it was marked "luck" and on the other end "karma," so when he turned it to the right more karma poured into the universe, and when he turned it to the left, more luck. The more of one, the less of the other. In exact proportions. I'd never thought of it that way. It makes sense though. Either you believe in total randomness or total order or some kind of proportional mix of the two. I believe in some kind of mix, though I couldn't really articulate why. I guess I shy away from extremes and always try to settle for the middle ground.
J: Ok. So in that conception of the universe you just formed, it's a hard problem to distinguish between whether a person's success was achieved by luck or whether it was achieved by intelligence, and it's not just hard because the terms we are using can mean wildly different things to different people. It's a problem that hinges on how we define ourselves.
Take me for example. I was born in the United States to two college graduates in 1980. My mother was an artist and a child development major, and my father helped maintain a tool store that's been in my family for the last 80 years. So is it lucky that I find myself born into this situation? Well, it is, given the definition we just agreed upon. On the day I was born I had put exactly zero effort into the world, and so along those lines any rewards I reaped from that zero effort were incredibly lucky. Essentially everyone is born having done nothing, and so not deserving anything. I could've been born in 1942 in Germany, or in Rwanda right before the genocide. It is only a matter of luck that I happened to be born in the time and place that I was.
Now that's a particular way to look at the world, that we are individuals that are completely separate from the environment that we are born into and that what we deserve has to do with the work we put into things. I think it's worth considering this from at least one other perspective. For instance if I think of myself as a part of a long line of decision making that has succeeded in being in the world then I'm just the latest extension forward in an unbroken chain of tiny decisions that have so far proved its intelligence in the face of extraordinary circumstances by successfully continuing to exist.
P: I'm not sure I follow you.
J: Let me frame it in a more concrete way. I grew up in a family business that was started by my great-grandfather in the Depression. It's persisted for four generations. My great-grandfather, my grandfather, my uncle and my father worked most of their lives to keep it going, and now I'm the fourth generation. It's survived partly by luck and partly by the intelligence of the people who created and maintained it. So is it lucky that I inherited it? Again, it depends on whether you see "me" as an individual who is separate from the lineage of work and decision making that preceded my being born.
It certainly makes sense to see people this way. We want to hold people responsible for their actions, not the actions of their ancestors, and so dividing the long chain of intelligence into individual people is simply a way to maintain a system of moral accountability. But for the moment, let's say this is true, that we are tiny extensions forward of a long line that's managed to continue to exist in the world, and that the boundaries between one person and another are fuzzy at best. How would this change how we evaluate the notion of luck?
P: Well, it makes it much harder, because "what is deserved" becomes a more open question since you are essentially blending the actions of many people over time. Let's just take your business as an example. I imagine there were intelligent decisions that helped preserve it, and I also imagine there were lucky circumstances that helped it succeed. How you tell the difference between those two...I don't know. I mean, despite our inability to perceive those differences, they must exist. You could make the argument that 80+ years is a long enough time to demonstrate that the decision making involved was intelligent, as opposed to lucky, since if a business was solely operating off luck, it would've run out by now.
But...for me, this idea, that you can identify yourself with the work of your ancestors, seems really dangerous, because inequality is such a problem right now, and I could see people justifying some pretty horrible things with it, depending on what "lineage" they chose to identify with. You picked your family, but maybe somebody picks "white Europeans" and then uses that logic to explain the success of their particular group. I can imagine someone who inherited a vast fortune, and who is just a terrible human being, saying something like, "I deserve everything I've inherited and I'm not going to share any of my wealth with all of the losers whose families were too stupid." It seems like this idea promotes selfishness, the notion that what is mine is earned and so shouldn't be shared, and that undermines something I think is vital to a healthy society.
J: I suppose my answer to that would be, well, that's an unintelligent piece of information, to think that people only fail because of bad decision making, and likewise, that people succeed only because of their intelligence. It's a view that lacks context and complexity. The second thing I would say would be, how long do you think people can exist successfully with that kind of anti-social strategy you presented? And the answer is...not long, because we are social beings that thrive on being competitive AND cooperative. You can't just have one or the other.
So when I think about the usefulness of that idea in my life, it doesn't seem like it justifies my own selfishness or lets me pat myself on the back, because first of all it implies that every single thing that's alive right now is a success. We've all made it this far. So that's incredible. That's a pat on the back for everything that exists. Secondly, if you take this idea seriously, it's a huge responsibility. As far as we know the universe is about 13.8 billion years old. Life organized itself at some point, and in an unbroken chain of decisions it has made its way to this moment right now. What do you do with that knowledge, that you are just a tiny extension forward of something that is incomprehensibly large and unknown, and that the decisions you make in your life will have substantial consequences for other lives, just like your parents' decisions had consequences for you, and their parents before them? So to me, the idea isn't inherently selfish. Sure, you can use the idea in a selfish way, but you can also use it in the way I just described.
P: But it's so hard to tell the difference between luck and intelligence and so it seems like if we can't really differentiate which is which, what hope do we have?
J: So the scope of the problem makes it easier or harder to determine whether intelligent action is taking place. For instance, if you are working in a closed problem set such as blackjack then it becomes very easy to determine whether someone is demonstrating an intelligent ability to exist in that system or whether they are getting lucky. You just have to watch them for an extended period of time and see if they continue to succeed.
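The point about watching for sustained success in a closed system can be made concrete with a toy simulation (all numbers here are hypothetical, not from the conversation): give a "skilled" player a small 52% edge per hand and a "lucky" one a fair 50%, then compare their win rates over many hands.

```python
import random

random.seed(42)

def win_rate(p_win, n_hands):
    """Simulate n_hands independent hands, each won with probability p_win."""
    wins = sum(random.random() < p_win for _ in range(n_hands))
    return wins / n_hands

# Hypothetical edge: a card counter wins 52% of hands, a pure guesser 50%.
skilled = win_rate(0.52, 100_000)
lucky = win_rate(0.50, 100_000)

# Over a handful of hands the two are indistinguishable; over 100,000
# hands the 2% edge shows up reliably in the observed win rates.
print(f"skilled: {skilled:.3f}, lucky: {lucky:.3f}")
```

In a closed system like blackjack, that is the whole trick: the rules hold still, so repetition alone eventually separates skill from luck, which is much harder in fluid, open-ended problem sets.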
It's not just blackjack though. This same basic idea applies to mathematics, baseball, IQ tests, kung fu or fishing. It does however get more difficult when you start to deal with things like human relationships, artwork, and the overall success of a human life. The more subjectivity, the more ambiguity, the more complex array of variables that interact in ways that make causal relationships difficult to discern, the worse we are at being able to determine intelligence.
But this is why I think we are attracted to, and have created, contexts which dramatically reduce the number of variables we have to deal with and make it clear who is and who is not performing well. There is something comforting about being able to discern success in unambiguous ways. For instance, people can debate about whether Michael Jordan or LeBron James is a better basketball player, but no one debates whether LeBron and Michael are good at basketball, because they obviously are, given how consistently they've performed over the years.
P: Ok. So let's just say some success is earned. What if this "demonstration of intelligence" that you're referring to causes more inequality which causes an imbalance in the system which causes more suffering in the world and then the whole thing collapses? What then?
J: If we can't anticipate collapse, if we can't see five moves ahead, see how our actions affect the system, and then respond accordingly, then...that's pretty unintelligent. If our society collapses and we're thrown into chaos and millions of people die and the intelligence we've worked so hard to cultivate is lost, then we've failed spectacularly at being in the world. Restructuring society so that the most possible people can flourish is a clear sign of a more intelligent society, and I want that society because I believe it has a better chance at continuing to exist in the world. In order to do that, though, I think we need to do some grappling with what intelligence actually is and develop strategies that help us identify it more clearly, because if we can't tell the difference between luck and intelligence, then we probably aren't going to get better at being in the world.
P: Ok. I think I've come up with an example that destroys this whole argument. What if the human race was wiped out by some virus. All of our knowledge, all of our intelligence is destroyed and what takes its place is a planet full of viruses. Is the virus more intelligent than the human?
J: So an organism’s first responsibility is survival. This is true for an individual, but it's more true for the society. I say "more true" because you can come up with some reasonable examples of an individual sacrificing itself for the greater good, but never the other way around. So in the example you gave, humans are struggling to combat a deadly virus, and for whatever reason all of our intelligence can't be utilized to solve this problem. So are we less intelligent than the virus? Well, biologically, yes. It doesn't matter how true a picture of reality we can paint if that information can't be leveraged to help us overcome real-world problems.
Just think about it in relation to another animal besides humans. Imagine there's a frog that does nothing else but build elaborate castles out of sticks. They do this because it provides them protection. For thousands of years they are investing their energy in this practice because it keeps them safe, and so they get better and better to the point where you have these massive stick cities where the frogs live in relative comfort. But then imagine tiny ants come along and slip through the cracks and kill all the frogs. Was this immense amount of effort that the frogs spent on this particular defense intelligent? Well, it was for a while, but it obviously wasn’t intelligent enough. Nature doesn’t care how intelligent you were yesterday. It only cares about how intelligent you are in this very moment, how responsive you are to the present danger that is always changing and shifting and coming from new directions.
P: You never really addressed my question about failure. The way you talk about it, such as the frogs "failing to exist in the world", it makes it seem like death is this big failure and so it seems like you're saying that all of the people that have ever died are failures. That just seems too harsh for me.
J: Failure has some pretty negative connotations, but it's impossible to avoid. It's an essential part of being in the world. So the question is not, "How do we avoid failure?" but, "How do we leverage our inevitable failures to eventually succeed?" or, more to the point, "How do we turn our failures into intelligent information?" Well, this makes perfect sense until you get to the failure of dying. Now, saying that death is a failure seems inhumane, because everyone is going to die no matter what, and it makes it sound like we're all doomed to fail, but that's only if you see yourself as a context-free individual that's totally removed from the thing that you're a part of, and I don't think that's remotely close to being true.
P: I just realized that we didn't make any predictions. We just spent this entire time talking about...I don't know what. I feel like we went all over the place. How does this have anything to do with your project here?
J: It's an easy transition, right? How are we going to avoid our destruction by a flesh-eating virus or a meteor or a nuclear war or any of the other countless ways we could be done in? By leveraging our collective intelligence, not only to see better into the probabilistic future, but to know how to intuitively respond to real-time disasters that we can't predict. If there's anything I want you to walk away with, it's this: Intelligence is not some disembodied intellectual pursuit that has little to do with our everyday life. It's a battle-tested way of existing in the world in the face of an environment that is constantly testing us. So to bring it all the way back to where we began, the real question is, how can Artificial Intelligence help us with this problem? How can we leverage it, and more importantly, how can we keep it from leveraging us?
P: I appreciate the conversation. I still don't quite agree, but I'll have to think about why and come back and see you.
J: Sure. I guess I should end by saying that it's quite possible that everything I've laid out is wrong. The test of whether this is an intelligent idea is, first, how accurately does it describe the world, and second, how useful is it? Can you immediately do some exciting stuff with it, or is it so unwieldy and complex that it just falls apart and is no good to anyone? It's likely that I'm wrong. The very best I could hope for is that I'm partially correct. It would be almost incredible if most of what I've said is true...but what else can we do besides test it out in the world and see if it survives?