Alright Michael, so there are lots of definitions of game theory that we could use. One that I like in particular is that game theory is the mathematics of conflict.
>>Hm, [CROSSTALK] that’s interesting.
>>I think it’s kind of interesting. Or, more generally, it’s the mathematics of conflicts of interest when trying to make optimal choices.
>>Because I feel like a lot of people have their own conflicts with mathematics.
>>I think everyone but mathematicians has conflicts with mathematics. I think that’s fair.
>>I see.
>>But can you see how worrying about conflict mathematically might be a natural next thing to think about after you’ve learned a lot about reinforcement learning? The next bullet kind of suggests a trend. We’ve been talking about decision making, and it’s almost always been in the context of a single agent that lives in a world and is trying to maximize reward. But that’s kind of a lonely way to think about things, so what if there are other agents in the world with you?
>>Right, and of course evidence suggests that there are in fact other agents in the world with you. And what we’ve been doing with reinforcement learning, which has worked out very well for us, is mostly pretending that those other agents are just a part of the environment. Somehow, all the stuff that the other agents do is hidden inside of the transition model. But truthfully, if you want to make optimal decisions, it probably makes sense to explicitly take into account the desires and the goals of all the other agents in the world with you. Does that seem fair?
>>Yeah.
>>Right.
So that’s what game theory helps us to do, and at the very end I think we’ll be able to tie what we’re going to learn directly back into the reinforcement learning that we’ve done, and even into the Bellman equation.
>>Oh, okay, nice.
>>Yeah, so that’s going to work out pretty well, but we have to get there first, and there’s a lot of stuff that we have to do to get there. Right now, what I want you to think about is this notion that we’re going to move from the reinforcement learning world of single agents to a game theory world of multiple agents, and tie it all back together. A general note that I think is worthwhile: game theory comes out of economics. In fact, if you think about there being millions and millions of agents, in some sense that’s economics, right? Economics is kind of the math, and the science, and the art of thinking about what happens when there are lots and lots of people, with their own possibly conflicting goals, trying to work together to accomplish something. And so what game theory does is give us mathematical tools to think about that.
>>I feel like other fields would care about some of these things too, like sociology.
>>Right.
>>And I could kind of imagine biology caring about these things, too.
>>Even biology. I like the idea of biology. Why would biology care about this?
>>Well, I guess the way you described it, in terms of lots of individual agents that are interacting, like creatures that live and reproduce. I feel like they have some of those same issues.
>>Sure. So certainly biology at the level of entities, at the level of mammals or at the level of insects, you might be able to think about it that way. But perhaps even at the level of genes and at the level of cells, little viruses and bacteria.
You could possibly think about it that way.
>>Because they’re in conflict too, I guess.
>>Yeah. Now, there’s this notion of intention, and it’s not entirely clear what that means here. Implicit in what we’re doing is a notion of intention and explicit goals, as opposed to goals that are kind of built into your genes, but I think that’s a perfectly reasonable way of thinking about it. The lesson from this discussion, though, is that what game theory captures for us, or what we would like it to capture for us, is ways of thinking about what happens when you’re not the only thing with intention in the world: how do you incorporate the goals of other people, who might or might not have your best interest at heart? How do you make that work? If you think about that problem, then I think it makes sense that game theory has been increasingly a part of AI over the years, and in some ways machine learning has started to think of it as a mainstream part of what we do.
>>Cool.
>>Hence why it’s worth talking about today. Okay, sound good?
>>Gotcha.
>>Okay. Let’s try to make this concrete with a very simple sort of example.