Blaise Aguera y Arcas (Artificial Intelligence, Free Will, Consciousness, Religion)
In this episode of the Judgment Call Podcast, Blaise Aguera y Arcas and I talk about:
- What inspired Blaise to work so close to the 'bleeding edge' of innovation and technology for so many years
- Is the missing 'explainability' of AI really a problem?
- Why the complexity of language (and its efficiency) is underrated?
- The impact of the 'surveillance economy' on customer perception - will it stay a 'cat and mouse game'?
- Why the protection of our privacy is "required for survival"?
- How much of an advance will GPT-5 be? Will we become 'data slaves' to AI?
- Is there room for 'personal opinions' with AIs? Will AI optimize for 'survival'? Does humanity even 'optimize for survival'?
- Why 'developed countries' have such low rates of fertility compared to 'developing countries'?
- Is 'utility' as a concept really at the core of 'How the world works'?
- Should we fear AI overlords? Or should we embrace them?
- Is 'Free Will' a useful illusion? Is empathy a necessary precursor to consciousness?
- What is the relationship between religion and AI?
You may watch this episode on YouTube - The Judgment Call Podcast Episode #48 - Blaise Aguera y Arcas (Artificial Intelligence, Free Will and Consciousness plus Religion).
Blaise is a software engineer, software architect, and designer. His TED presentations have been rated some of TED's 'most jaw-dropping.'
After seven years at Microsoft, he moved to Google, where he now heads Cerebra, a Google Research organization.
Blaise Aguera y Arcas: I actually just wrote a novella on these topics, which I've just begun looking for a publisher for, but it's very of this moment and about those last topics that you were raising. I can send it to you if you'd like. Awesome. Yeah, I'd love to talk about it then. So I think that's a really good match. What is the core of the novel? Well, it's short. It's about an 80 page kind of novella, and it's quite dense and a bit unconventional. But basically it uses, I mean, I'm not a singularity person, to be clear. I'm more than a little bit skeptical of a lot of the Kurzweil singularity sorts of things. But Ray works for your employer now, right? Oh yeah, he's a colleague and he knows about my skepticism. But at the same time, I think he also has a couple of very valid points, one being that history is clearly exponentially speeding up. I mean, that's obvious. And also, I think that brain uploading and so on is actually quite a long way off, if it will ever work. But we are certainly approaching a moment when artificial intelligences start and need to be taken very seriously, not just as machine learning models, but as real intelligences. And I think things like GPT-3 are a bit of a wake up call in that regard. So yeah, the structure of the novella, it's very heady. It uses Walter Benjamin and his theses on the philosophy of history as a sort of frame. And it imagines that the moment that we're going through right now is sort of like an event horizon. And it has three sorts of chapters that are broken up into the present, before times and after times. The before times chapters are in the form of documents or fragments that come from the past and that are actually used as part of the training of the ML. The present tense parts are a narrative that takes place over about nine days, and nothing earth shaking happens in that narrative. It's during COVID times, it's almost autobiographical and it's very compressed. So not a lot happens, but you see the development of the AI. And then the after times is written in terms of iteration numbers and is written from the point of view of the AI. And there are mysteries about who is actually writing this thing, who is the reader, who is the writer, what's the perspective from which it's being written. So it's a little bit of a meta novella, kind of like Nabokov's Pale Fire or something like that, where there's an unreliable narrator and you're unsure until the end what's going on. It sounds like great science fiction. I had a lot of fun writing it. I wrote it between my first and second COVID shots, kind of a little bit of a fever dream.
Torsten Jacobi: Okay, okay. I might be able to do this. So you've spent about two decades now, as far as I know, working first at Microsoft and now at Google, on teams that are, from what I understand, really at the bleeding edge of technology. They are both technology companies, but you've chosen to lead the bleeding-edge teams there, and I think now it is AI; before it was more maps and other topics. Why did this job choose you? How did you get into that?
Blaise Aguera y Arcas: Yeah, it's a good question. I've been very, very lucky, very privileged to be in these very exciting times and places. And it's a little bit of a long story. My training, such as it is, is actually in physics and computational neuroscience. And my wife is a computational neuroscientist. We've read a couple of papers together back in the day. Okay. And in a sense, I feel like the dawn of the computing age was very much interwoven with the dawn of computational neuroscience. So this idea that computers are artificial brains is not a new thing. That was a core part of the entire concept of computing from the beginning. Even to the point where the logic gate symbols, I think I talked about this in one of the TED talks, are actually derived from the symbols for pyramidal neurons in one of the key McCulloch and Pitts papers from the 1940s that draws an analogy between computing elements and neurons. And so that was very much present in the minds of Turing and von Neumann and the other early computing pioneers. So I've always had a feeling that although these were kind of twins separated at birth, they were going to reconverge, and I have been sort of biding my time until they reconverge and working on other problems in the meanwhile. So the problems that I was working on in the teams that I was leading at Microsoft had more to do with classical computer vision and machine vision. There are certainly some parallels between the teams that I led there and the teams that I'm leading at Google. I was there for about seven years and I've been at Google now for about seven years as well, so it's a bit less than 20 years, but I had a startup before that which Microsoft acquired. And classical computer vision is not very brain like. So the first TED talk was about Seadragon and Photosynth. Photosynth is a classical computer vision problem. But toward the end of my time at Microsoft, two things changed. One was about the company and one was about the technological milieu. On the company side, I certainly don't want to say anything negative about Microsoft. They were wonderful to me. I grew a tremendous amount at that company. But the company made a decision, partly based on their failure to break into the phone market; the sort of failure of Windows Phone, I think, made it clear that it was destined to turn back to its roots and become more of a B2B sort of company. And that's a move that Satya Nadella has executed very effectively, which has made the stock price go up quite a bit. And so it's been good for the company, but it made it less the kind of company that I wanted to work at. For me, the most exciting problems and the greatest innovations are very much in, I hate to say consumer, but in things that affect people as opposed to companies, I suppose. So it's become a little bit more of an IBM style company since I left. And I saw that change coming, and that made me think about a change for myself as well. But the other thing was, this was in 2013. It had become clear by 2013 that neural nets were back, really back with a vengeance. This was after some of the really groundbreaking new results from convolutional neural nets that showed that computer vision problems that had been intractable for decades, being able to recognize what kinds of objects are in a visual scene, for example, were finally getting solved, and solved in a fairly brain like way.
I mean, I don't want to overstate the analogy between convolutional nets and visual cortex, but it is a visual cortex inspired architecture. And it's certainly not a conventional computing sort of approach. There's not a program being run; it is virtual neurons being activated in cascades. And it seemed to me, and to a lot of people at that time, that what we were calling deep learning was really rising. And I felt that this was going to change everything. So Google was the company that was really at the forefront, and still is, of that kind of work. And that made Google very appealing. But there was also something else that I was thinking at the time, and it was quite important, which is that Google is also a company that historically has done business by running massive online services. And I think it's not a coincidence that they were also at the forefront of this new kind of AI, because this new kind of AI was very, very data hungry and requires massive amounts of training data and massive amounts of computation to train. And they had giant data centers and they had giant amounts of data. So it made a kind of sense that they were at the forefront of it. And I don't want to diminish Jeff Dean's vision with respect to that. I mean, they had to have the right talent to recognize this, but that was one of the reasons that that opportunity was there to seize. And I felt like we were facing two possible kinds of AI futures: one in which these giant neural nets were all run centrally as services, so there's a small number of AIs, if you want to put it that way, that are serving everybody; and another in which it's much more decentralized, and you and I have our personal AIs, and every company, every room in your house has an AI. It's more like a society of decentralized AIs. And I really wanted to tip things in favor of the second rather than the first alternative, of decentralization. And I felt like if I went to Google and tried to push for that decentralized approach, my odds might not be great because it was running so counter to the culture of the company as it had existed heretofore. On the other hand, if I could succeed in making that kind of change at Google, that would really matter. And I thought I'd rather take my chances going someplace where I have high odds of not succeeding but where success would matter, than staying at a place where I'm a more known quantity, with higher odds of success, but not sure that anything that I do here is going to really change the future in the same way.
Torsten Jacobi: Yeah, I think you already answered a couple of my next five questions. But I think this was a wonderful way to see this through your own perspective and the vision that you have for what you're doing right now. And I think the unit is called Cerebra, right? Like Cerebra. Yes. Yes. And the AI models and tools that you put out, like federated learning and Coral, enable AI to run on a user's device without syncing much, or even to work offline. And I think this is a wonderful way, how you describe it, how we can maybe change the way people look at AI as this behemoth, you know, when we look at Westworld and the massive AI, I think it was called Solomon, that basically ran the world. I think everyone is very worried about that. Yeah, I mean, those anxieties go all the way back to like Moloch, you know, and the sort of the 20s and 30s exactly. It's an industrial revolution anxiety, really. One thing that I was curious about, and I think there were two questions that immediately arose, maybe you already partially answered one: how do you decide, given all these priorities and all these possibilities you have at Google, where you put your efforts and what you actually release? So it's a bit like David Hume's problem, right? We have all these options, but what actually goes through your mind, and maybe through the minds of other colleagues at Google: where do you actually put the resources, what do you want to release, what do you want to keep inside the company, and what do you want to push out there? And then the second question, maybe it's a bit related. I think the problem with AI is that we don't know what's going on inside the box. We don't know the reasoning of the AI. That's a core problem right now that might change over time. But right now, that's a big issue. So we have to constantly validate it. We are worried about biases. And we don't know, because we would have to basically go through a lot of data ourselves to see what's going on, or run a different AI. But I feel, even if we federate it, and I like that approach, we download a standard model that might have all the biases attached. So we run it on different devices, but we are just modifying an existing model. So we might download other people's biases. And I'm not talking about racial biases necessarily, it's just decision biases that we are not aware of. And that seems as scary as having a central AI to me.
Blaise Aguera y Arcas: Yes. Yeah, these are very good questions. You've asked two giant ones. Let me try and take them one at a time. We have time. We have time. Excellent. So first of all, in fact, let me begin by taking the bias and fairness and ethics one, and then we can go back and address the question of what Google releases, how that works, and what I choose to have the team focus on, too. So first of all, the questions of bias and explainability, those are different questions. Let's begin with explainability for neural nets. Explainability is a question, or a charge, that has been leveled quite a bit at neural networks because they don't look like code. So you can't really step through them in any meaningful way. Instead, you have these massive banks of filters, in the case of a convolutional net, for example. So it's just tons and tons of numbers. And that is the net. So how do you explain how it makes a decision about how to classify an object, or what to reply? It's hard to make a legalistic sort of sequence of deductions about how a certain decision came about. But I guess I would point out a couple of things. One is that in complicated real world software systems, if you ask the question, for example, before neural nets were involved in Google search, when it was all classical computing, how did a ranking decision get made, that answer would be extraordinarily complex as well. When you have millions and millions of lines of code that have accumulated in order to continually improve a product over generations of software developers, all working on bits and pieces of the thing, you also don't end up with an explainable system. You end up with something where, in theory, you could dump a stack trace or whatever. But if you've ever seen a stack trace from, I don't know, a crash on your computer or something, you know that that doesn't look very explainable. In order to debug the simplest sort of memory overrun or something like that, programmers might spend months trying to dig through some particular stack trace and how to reproduce it.
Torsten Jacobi: But with a search problem, there is always an expert who knows what to do. I have these problems when I do my own development, my own programming, and there's always someone on Stack Overflow who knows it or can fix it, or someone on GitHub. But I feel with AI there is nobody left. There is not this one expert.
Blaise Aguera y Arcas: If it's a programming error, then there's generally somebody on Stack Overflow who has seen it before. But if we're talking about a bunch of code that has been written over the years to make a judgment call, which is what we're talking about now, we're not talking about a programming error. We're talking about judgment calls, and then I think it really is much, much more complex. The kind of questions that we'd be asking ourselves are like, why did my business become the fifth search result as opposed to the third? I could answer the question in some kind of pedantic way, well, this branch of code, that one, that one, that one, but a satisfying explanation would be just as hard as if it were a neural net. And in fact, in those kinds of cases, I would say that neural nets are actually somewhat easier to explain than large classical code bases because, unlike with a classical code base, you begin with a clear set of training data and a clear objective function that you're trying to optimize. And so that is actually quite a compact formulation of what and why that you don't get from the accumulation of the choices of thousands of engineers. I don't want to... A very good argument. I'm not trying to dodge the explainability challenge, because I think it's major, but I'm pointing out that I think we often have a sort of idealized strawman that we think of as being the explainable case, which is not generally there. We're not starting from an explainable spot either. And then if we take this a little bit more meta, I would say we think about humans and human decisions as being explainable. That's the basis on which the entire legal system is built. If a judge makes a decision, they have to be able to say why. That's what the law is all about, the fair application of it, et cetera. A, the law is not fair, and there are many, many studies that show this very clearly. B, the narrative structures that we impose in order to explain our actions and our decisions; again, there's a huge body of work in psychology and in legal theory, well, in legal theory less than I would like, but in psychology certainly, that shows that we're very good at making stories. And those stories might rationalize a series of decisions and actions, but there's not necessarily the causal relationship there that one might wish, to put it mildly. We're very good storytellers. We're very good modelers. We model each other socially. We model ourselves. That's what self consciousness is. But the idea that that model is the actual underlying thing is completely false. You have trillions of synapses, probably quadrillions, I don't know how many synapses are in your brain. And your model of yourself and of your decision making processes has nothing to do with the detailed firing of all of your neurons; it is not a sophisticated model of all of that. It's a story that you tell yourself.
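The "compact formulation" mentioned above can be written down explicitly. In the generic supervised-learning setup (this notation is illustrative, not a description of any specific Google system), a trained network is approximately the minimizer of a stated loss over a stated dataset, which is a far shorter specification of "what and why" than millions of lines of accumulated code:

$$\theta^{*} \;\approx\; \arg\min_{\theta}\; \frac{1}{N}\sum_{i=1}^{N} \ell\bigl(f_{\theta}(x_i),\, y_i\bigr) \;+\; \lambda\, R(\theta)$$

where $\{(x_i, y_i)\}_{i=1}^{N}$ is the training set, $f_{\theta}$ the network, $\ell$ the objective, and $R$ an optional regularizer.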
Torsten Jacobi: Yeah. This is a really good topic. I think we can spend an hour on this. We could. I love what you say, but I think there is something magical, and I want to maybe get back to this if we have time, to this narrative as a way of not just explaining things, but also as a kind of cloud storage for really complex issues. So we can encode things that are extremely complicated relatively simply. It's like when you have a zip file that's three gigabytes and you encode it into...
Blaise Aguera y Arcas: I agree. I agree entirely, Torsten. Language is super compact and powerful. And you can use it not only to reason things through, but also to make changes in your thinking process in a very compact way, unlike current machine learning models. You can say, for example, oh no, you're in a country now where the screw tops on everything twist the other way, so it's clockwise rather than counterclockwise. You just say that once, with those words, and you'll do the right thing every time from then forward.
Torsten Jacobi: Yeah. Elon Musk is on this trajectory where he says language is so inefficient and the encoding doesn't work. I think he's missing the point that there's a lot of learning in the layers below that, which is very, very efficient. We don't have to look at a lot of data; exactly as in your example, one abstract message can actually change our model completely, which is amazing.
Blaise Aguera y Arcas: Yes, I'm entirely in your camp about this. I think language is enormously powerful, both as a way of learning, transmitting information, and building up cultural information over time, which I think is most of what human intelligence is, by the way; I think it's cultural, not individual. So I'm very much in the same camp. I disagree with Elon Musk strongly on this. And I think that the language models, I mean, you were asking earlier about GPT-3, I think that the progress that we're now making with language models is bringing us closer to a world in which you can have exactly that kind of discourse with machines. And that's very important for explainability as well as for efficiency of learning and all kinds of other things.
Torsten Jacobi: Yeah. Circling back for a moment, before we go into these deeper issues, when I looked at Coral, right, so that's one of the software packages you released, and I think this is an open source release. Yes. I was really excited. And I thought, oh my gosh, I can just build crazy AI with it. But in the end, the models that come with it (I can do my own models, you know, I can train whatever I want), the predefined models that are already available on the website right now, they're really boring. They're object recognition, right? They're so basic. I felt like I was reading something from the 70s. So we talk about AI finally getting its moment in the limelight, right? And we feel like it's taking off. But then I look at Coral, which seems extremely powerful because it's a federated model that you can run on each device. And I was expecting things like, I don't know, cancer recognition or something really powerful, right? Right. And it wasn't in that prepackaged model; it doesn't mean you can't do it with it. But I was kind of hoping there's some science fiction in this. Why don't we have the science fiction in our hands yet?
Blaise Aguera y Arcas: It's a good question. And this speaks to your other question about what we release or don't release and so on. Yeah. So first of all, there is actually a Coral demo for online cancer detection. And I don't think that it's a model that we have publicly released. And the reasons for that are the kinds of liability that come with Google releasing a cancer recognition model. I mean, that's a medical grade thing that requires a level of studies and validation and regulation that, historically, the company has not been prepared to take on. That is changing with Google Health. So we do now have collaborations going on with Google Health in these kinds of areas, but it's a long, slow, arduous process. If you were to ask me, do you think that all might be a little bit over regulated? I think the answer is probably yes. There are reasons that we have heavy health regulations: it's to keep unsafe drugs and unsafe medical procedures from making their way out into the world, it's to avoid Tuskegee experiment kinds of horrors. So there are good reasons for all of this. But it also means that innovation in that space can be very, very slow. It's one of the reasons that I'm delighted that the vaccines managed to happen so quickly despite all of this. I guess when we really care, we can fast track things, but it's hard. But more broadly, you're saying, right, we have object recognition, very simple speech recognition, very simple person counting; this doesn't seem very sci fi, right? It seems like stuff from the 70s. There were models in the 70s that did this kind of stuff, although none of them with anything like the quality that a deep neural net can achieve. So they're doing old problems with much higher quality. But the reason that we focused on those very workaday things for Coral specifically is because Coral is not really so much about being cutting edge with respect to what the AI is doing, so much as being cutting edge about how it's doing it. So that project specifically is for solving problems like: if you want to put a sensor in a department of motor vehicles or something that says how long the line is, how long is the queue in front of the desk, and then for that to go on a public website or something like this, then it would be nice to have a system for doing that that's very simple and appliance like, and where all of the computation that turns the video into this integer, how many people are in the line, happens locally in a way that doesn't violate privacy. So it's very workaday problems like queue length and so on that are really at play here; that's 90 percent of what clients of this kind of stuff want. And what we wanted to do was show that those things were possible to do without setting up surveillance systems that have all kinds of negative side effects. So it's a different sort.
It's not the sort of cutting edge research on neural net architectures or applications, but more sort of: let's take the things that everybody needs, that are common among many, many industries, and show a different way of doing those. And now, obviously, there are researchers in my team and in various other parts of Google Research that work on much more sophisticated applications and architectures that do things that are really kind of shocking, and that are a little more science fictiony than counting people or recognizing smiles. And most of that work gets published in very short order. So there's a huge number of papers that come out of Google Research, and many of them nowadays are coming out with code as well, so it's reproducible and it's part of the open research community. There are some checks and balances on what comes out. I mean, you mentioned GPT-3; the OpenAI team decided when they made that that there were some dangers in releasing that model, in that it could be weaponized in certain ways. So that's the main thing that we think about before releasing: are there risks, are there dangers to making one of these things public? But by and large, we're very open about what we publish.
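To make the "video in, integer out" pattern Blaise describes concrete, here is a minimal sketch of how a queue-counting appliance might run entirely on-device with a TensorFlow Lite detection model. The model file name, output tensor ordering, and label map below are assumptions for illustration; the Coral tooling wraps this same general pattern, but this is not the actual Coral demo code.

```python
# Minimal on-device people-counting sketch (illustrative assumptions:
# an SSD-style TFLite detection model and a COCO-style label map where
# class 0 is "person"; real models differ in tensor order and labels).
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

MODEL_PATH = "person_detection.tflite"  # hypothetical model file
PERSON_CLASS_ID = 0
SCORE_THRESHOLD = 0.5

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def count_people(frame: np.ndarray) -> int:
    """Run detection on one camera frame (already resized to the model's
    input shape, uint8, HxWx3) and return only an integer count.
    The frame itself never leaves the device; only the count would be
    published, e.g. to a 'queue length' endpoint."""
    interpreter.set_tensor(input_details[0]["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    # Typical SSD post-processed outputs: boxes, classes, scores, count.
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    return int(np.sum((scores > SCORE_THRESHOLD) & (classes == PERSON_CLASS_ID)))
```

The design point is simply that the raw video stays on the device and only the derived integer is shared.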
Torsten Jacobi: Yeah, I think humanity owes Google. And I think we reward Google very nicely with this market cap. So I think it goes both ways, and there are really cheap loans, you know, at zero percent interest rates, that were meant for struggling airlines, and Google gets them anyways. So I think it goes both ways. There's a lot of love currently. I think where the love is a little bit in doubt is the topic you just mentioned: the feeling that we are becoming fully surveilled. And that's true. There's all this data there. We used to not care about it, but there's so much more data now. There are so many more sensors and so much AI running that kind of reads our brains better than we can read them ourselves. So people are becoming a little bit concerned. And obviously, Google says, well, we need that data to sustain our business, we give you free services. And I think everyone is kind of okay with this initially. And then you realize, oh my gosh, there are like 2,000 data points that Google can read from you. And using those 2,000 data points, they know you exactly. Like, there's no doubt when you get married; they know the date before you even propose. So it's really scary how this tech works, because we are creatures of habit, we are social creatures. So we behave more like other people than we believe we do ourselves. But we have this illusion of free will. We don't think this should be the case. And I know Google does a lot of anonymization and it plays around with giving people their privacy. But in the end, they need the data to make money. And there's always a market for alternative data. Like, I was just talking a couple of episodes ago about how many more startups are now coming up with alternative data. And like every sensor, basically, you can create a company around it. So you sell this data and then you will make money from this. Maybe not trillions, but a couple of million is always in the game. And the question is, it's not really a Google issue. I think everyone has that issue. But Google, because it's better at it, gets more of the heat. How do you think this will play out? Will people eventually rebel against this surveillance industry that we are in? And especially Facebook, I think Facebook is the worst offender right now. But, you know, everyone has the same problem. Or do you think that lines can actually be drawn? Because I feel like everyone who draws up these lines is 10 years behind. So, by the time these lines are drawn up and say, okay, you can't have this data, like what happened in the European Union, it's about web data, and nobody cares about web data anymore. You just don't care. Because device data is what people want, or sensor data. And those are not even covered by GDPR potentially. And then the reality is always 10, 15, 20 years ahead of what's just been regulated. So, is this a cat and mouse game that will keep going on forever?
Blaise Aguera y Arcas: This is a great question. And it's very close to my heart because, I mean, the concerns that you're raising are exactly the ones that brought me to Google and that kicked off all of the work of my team. I mean, they animate all of the work of my team. But before I dig in and answer in detail, I want to step back for a second and also just reset the critique a little bit, perhaps. So, I mean, I've read Shoshana Zuboff's Surveillance Capitalism with a lot of interest, and many of the other books that raise these kinds of critiques. I'm friends with quite a few people outside Google who are very vocal advocates for privacy and very sharp critics of Google and companies like it. The Social Dilemma was probably the popularization of a lot of these ideas about a year ago or something. So The Social Dilemma has this kind of recurring animation of a sort of puppet that is like an animatronic version of you that lives in the data center, that becomes so precise that you can be predicted completely, and that's then the basis for a kind of futures market in your behavior. So that is a terrifying vision, but I also want to temper it with the reality, which is that, yeah, I actually don't believe that people are as unpredictable or unlike each other or individualistic as they believe. I mean, I'm a critic of individualism in multiple senses. I think a lot of where our intelligence really lies is social and societal and not really individual at all. But at the same time, if we imagine that these models are all seeing and all powerful and understand all of our hopes and dreams and wishes better than we do, I think that is not the universe that really obtains inside these companies. It's almost the opposite problem. I know because there are a couple of teams within my own group within Cerebra that have done personalization models for other parts of the company. It's not the kind of work that I generally have people in my team doing, for reasons we'll get into, but I do know how that sausage is made, and they're actually not that great. The problem with recommendation streams and things is not that they are so prescient that they know you so well that they can anticipate your every interest, but rather that they're too simple and too reductive. And frankly, that's one of the reasons that I believe we end up with a simplified discourse in a lot of social media, and the polarization. That sort of polarization and simplification of the discourse comes because of emergent behaviors. It's not just the ML systems, but emergent behaviors that the ML systems are part of, that are highly reductive and that just funnel people into a small number of modes rather than having a real model of Torsten and what might interest him. In some sense, a really good ML model would have a very different effect, I believe.
Torsten Jacobi: I fully agree. I think this is one of the things that are least understood about what happened since 2015, since we were basically motivated by an engagement algorithm that Facebook invented, so to speak, and then rolled out publicly. And the deflation of likes, that's my theory, has really led to this depression of the last five, six years. Mental depression, not necessarily economic. Now we have the economic one too. I agree with that. And the incentive is always there to give you the thing that you're most likely to click on, right? So there's an anti explore, pro exploit sort of bias. It's terrible. It's terrible what it led to. I think it was well intended. And if I had worked at Facebook at the time, I would have propagated that too, and I would have wanted to push this out, but it's terrible. But there were unintended consequences. Yeah, to your point. So they are not evil, I would say, the engineers, but they also need some help from psychologists and people who think a little bit outside the box, but they all want to make money.
So they do. Although the idea that psychologists have the answers, or ethicists have the answers, or whatever, is also false, I think. True. I mean, none of us could have predicted it. I say that, and I'm sure that there were some predictions that were accurate, even in the very, very early days. But I suspect that they were drowned in the noise of many, many other predictions that didn't come to pass. Well, if you read Socrates, I think you would have made that prediction in a heartbeat, because you realize the 90% out there will have different opinions, and engagement, like this five second engagement, is not the same as the five hour or five day engagement. If someone comes up with a way to measure what actually sticks, what stays in our mind instead of what we just click on in the first five seconds, I think that's the holy grail, if they can solve it. But I agree. And so two modern thinkers who I think could probably have done a pretty good job of predicting it are Danny Kahneman and Amos Tversky. Right. So Kahneman and Tversky, with their fast thinking, slow thinking sort of thing. So yeah, I agree. I don't think it was completely unpredictable, but I also think that a lot of these are emergent effects. I mean, if we think about, for example, the genocides in Myanmar, and social media having been a major factor in the way those came about. I mean, that's Facebook and WhatsApp, I believe, and something extraordinarily evil came out of that. But the idea that there is a single actor that you can pin the blame for it on, I think is a little bit... Yes, I mean, it's an emergent phenomenon. René Girard, that's how the mind works, right? So we need to scapegoat, and then the scapegoat actually saves us. That's how the mind used to work over such a long time. And it seemed to relieve us of that pressure, because we can actually move on, because we have the scapegoat. So it moves around, who is that scapegoat. But I wanted to say, individually, the AI certainly is reductive. And when I look at the code, I think this is just random nonsense, so why would anyone worry about this? But the problem is, from a consumer's point of view, it looks different, right? So say Alexa listens to you and then Amazon gives you the predictions about cat food. But I don't have a cat, yet I get cat food ads the next day because I talked about cats, and I maybe wanted a cat. Or maybe it's a lucky thing, right? So 99% of the ads, I still feel I'm not very well targeted, and they come all the time, the same ads on YouTube. But there is this one day where I think, oh man, this is really creepy. And then as a person, I extrapolate from this one event that's statistically not relevant. But that's where you pin the sense of creepiness on to it, right? Yeah. And then for me, all the ads are creepy. I mean, this is how human cognition works. Like we see one accident on the freeway, and we think driving is dangerous. But then two weeks later, we think, no, driving isn't dangerous. So there is something weird in our mind that is very different from the statistical, dribble-by-dribble learning that AI does. And I think engineers also have in their mind, they think, oh, it's not relevant. But no, it is relevant, because you only get a few shots, and then people just sign off because they think it's creepy. Well, you're talking about, you know, just from a sort of PR perspective or a business perspective, why it's relevant.
I mean, I think it's relevant for two reasons, neither of which is about just big statistics. One of them is chilling effects and our sense of individual agency and our ability to be ourselves and have that sense of privacy, right? Which is not the same as security. It's not the same as secrecy. Privacy is a real thing. Anybody who lived through the DDR or other surveillance states understands what it feels like to be in a society where you don't have privacy. And it feels awful. It feels awful even if nothing is done with the information, or even if the surveilling entity doesn't have any problem with you, right? Or doesn't have it in for you. But the other and even more serious problem, besides chilling effects and the psychology of all of that and the importance of privacy from a psychological standpoint, is that if any entity has the kinds of records that we're talking about, let's say you have a device in your house that is listening to you all the time and storing records somewhere in the cloud of everything that is said in that house, that is like a sword of Damocles hanging over your head. And it has very real civil liberties implications. Even if the stewardship of that data is in the hands of a really good steward, it's still Black Mirror territory. And if regimes change, if liberal democracy starts to collapse, and those things are there, then you can go to a really dark place societally. So I think these are... I mean, when I push back on "the models are not that good," et cetera, I'm not pushing back on the problems of surveillance. I think they're very real, and they very much animate my work, as I was saying. Yeah. I mean, I look at it from Friedrich Hayek's perspective, that you have to be free of coercion. That's the goal. Right. Because we know that in that area, we develop the best. And this is not an altruistic goal, or that I'm concerned necessarily; I mean, I am concerned about humanity, but it's not necessarily an empathy thing for me. It is: if we don't achieve this, we will all suffer and we will die, and someone who does it better will take over. That seems to be the lesson from history. If you have an entity... like I saw this for myself, growing up in the DDR, in Eastern Germany. We had the perfect example: you take the same kind of people, you put one group in socialism, which is very restrictive, but very utopian, very well intended and very efficient in that sense, a very efficient bureaucracy. And then you have the other side, which also has an efficient bureaucracy, but much less of it, and lets things develop freely. After 15 years, the verdict was out. Everybody was leaving. It was never even close to coming back. To the surprise of a lot of people who were so enthusiastic about it, like my own parents, who were very enthusiastic about socialism, communism. But it doesn't work, because it's coercion, and these static models just don't work for the long term. I think we as humans instinctively know that. I think this is why we crave this freedom so much. Yeah, I agree with you. And there are studies, I mean, they are small n and I would need to go back and look in detail, but even things like rates of organ donation varied quite a bit between East and West Germany.
I mean, paradoxically, because you would think that in a communist and socialist environment, there would be more willingness to do for others. But in fact, those psychological weights, those chilling effects, seem to actually push the other way. I'm not... I mean, to be clear, I'm speaking personally here; these are my own views, not Google's or whatever. I'm actually very much a proponent of universal basic income and other kinds of socialism. But I don't think that's very socialist. I love UBI, by the way. I don't think it's very socialist, but the social model, what I'm worried about with the socialist model, is the restrictions that you have to put on it to keep it alive, right? Your free healthcare and free bread. It's not a bad thing. It's not a bad thing. So I think we're in the same spot. I mean, I worry about surveillance. I worry about limiting freedom. I worry about chilling effects. Things like free healthcare, free education, universal basic income are kind of orthogonal to those points. I want to go into one thing that I have been thinking about. One of the founders of GPT-3, one of the OpenAI developers, was on another podcast; he wasn't here. And he basically said, you know, there is a really good chance that GPT-5 could look to basically everyone on this planet like it is conscious, like it has a real idea of what's going on. It could be very human like, and not just in a Turing test, but to everyone who interacts with it in a digital way. Yeah. And one thing I think that's missing from GPT-3: it knows so much, well, we don't say it knows, but it gets so many things right that look like poetry to us, but it's kind of random. So people say, well, this is just a random outcome of statistics, and if you shoot enough darts, some of them will look like poetry, right? So that's kind of the answer. But what he said is that what's missing is the user correction, like the Google click stream in the Google search engine. A lot of people say, you might correct me, it's not just the AI, and everybody now could come up with the same AI; the benefit that Google has is that they have the click stream, and the click stream reduces errors. Say the AI is only 90% correct, or 80% correct; with every iteration it gets better, because it takes into account the click stream. So basically humans become just error clicking machines, so to speak, for the real AI. And that's what he said about GPT-5, taking into account all this user feedback, which they don't have yet. And they're very wary about releasing it; I feel that's a mistake, but that's obviously their call. But once they have enough user feedback and they get to 100% or 99.999% of correct decisions, he said, nobody on this planet might be able to figure out if this is an AI or not. When you say correct decisions...? I mean human like decisions, or better than human, let's put it this way. So there are never really any "correct" decisions, so to speak. It's... I mean, I think it's a puzzling framing. I think I disagree with it. I don't disagree, by the way, that GPT-5, or if not GPT-5 then GPT-10 or whatever, will absolutely be able to pass the Turing test. I think that's highly likely. I don't see any reason why not. I mean, the progress in language models has been astonishing.
I think it's a very interesting question to ask: well, if you can't tell that there isn't anybody home, does that mean that there is somebody home? Right? That's a profound question. It's basically similar to the question of whether there is such a thing as a philosophical zombie. And we could certainly spend some time on that one. Turing's own relationship with that question... I mean, I think that the Turing test is sometimes a little bit misunderstood. It really is basically saying something like: faking it is making it. And the parallel has sometimes been drawn with his sexuality as well. What does it mean to pass, to pass as straight or what have you? Is there a difference on the inside or not? He was saying, perhaps tongue in cheek, perhaps not, that nobody else can know what is inside you. So if you can behave in way X, Y or Z, then who's to say that there's anything else, anything other than that, that is reality, right? That the empirical equals reality. So that's a really interesting question. But the idea that this is going to get, quote unquote, solved by having a metric or a number that goes up and up and up, and that the way to get the metric or the number right is to interact with billions of people billions of times, strikes me as a little bit of a mixed metaphor, or taking an approach or an idea that worked in one context and applying it in a place where we have no reason to believe that it will work. I mean, that's not how humans work. That's not why we are what we are. We're not the aggregation of trillions of interactions with lots and lots of other humans that tell us when something is human like and not human like. That's not how it works. I don't know about that. I don't know about that. Because when I think of children, what they do is they download culture. As you said earlier, we are cultural beings. We are not a computing engine; we outsourced computing a long time ago to the crowd, to other people. So what we do is we download all this knowledge, which is kind of the model, right? That's the standard model. And then we go out in the world and we refine the model. We generate our own layer of better models on top of that. But still, most of the knowledge generation... I think this is really popular lately, that people say, especially economists, you can't be better than what's already out there. Say you come up with a new theory, or you say something political, and people say, no, how would you know? Because it's impossible, because all the information is already in the market. There's basically no way you can advance on anything, because it's already in an equilibrium; it's already out there. The market is full of information. So people say, well, I basically direct my own decision making to whatever the mainstream consensus is. People say that's more female like than male like, but I think this is very popular now, where people say, I don't want my own opinion. I just look at whatever news feed is most authoritative; I look at Hacker News for certain hacker news, so to speak.
So I always find the source of authoritative news, and this is what I adopt unquestioningly, because by definition, I can't be better. And those sources are actually other humans, right? So we outsource knowledge generation to other humans. And we have this tiny, tiny sliver where maybe we add some actual knowledge, but for most people, this is more theoretical than practical. They don't really interact in this knowledge economy as an input; they just consume it. And I think this is kind of the same as what I see with AI very, very soon, right? They don't really learn from their own experience; they learn 99.99% from other machines. And I think this resembles our human, Homo sapiens approach perfectly. Well, I mean, I guess the first question I would ask... no, no, I mean, I think your theory is interesting. It's one that I've heard things sort of like it articulated before. It is different from the way I think about it. I mean, first of all, we're not trying to optimize something. I think this is a common misconception. When we build ML systems, generally we do have a specific loss function, a specific thing that we're trying to optimize. Although with unsupervised learning, which actually is getting a huge amount of traction now, it's not always so clear what that is. And it's also not so clear with GANs. Well, the loss function for humans is survival, right? So we can do the same thing for machines eventually, right? There is a propagation of knowledge to the next generation. I disagree. I don't think that survival is nature's loss function. I mean, how can I put this? I actually talked a bit about this in my NeurIPS keynote from 2019, from December or November 2019. But I think that it's actually fairly easy to argue, mathematically, that life lacks a loss function. And the way to notice it is as follows. You aren't a sort of agent in a static environment. Really what we have are societies. We have groups of agents interacting with each other. And even your own brain is a group of agents, if you like, neurons interacting with each other; they have their own lives. Every cell in your body has its own life. And it's kind of like societies all the way up and down, if you want to think about it that way, from single cells to what we think of as organisms, to what we think of as societies. And so now you already have to ask the question, well, survival of what, exactly? What is the thing that is being optimized? So, for instance, the cells in your body, what are they optimizing for? They obviously have to work together in order to keep you alive. And the fact that you are alive and an organism and nourishing them means that they can sort of relax; they can lower their guard in certain regards, right? They don't have the same hard life that an amoeba has, where it has to kind of go and do everything on its own. But the idea that every cell in your body is trying to optimize for its survival is certainly wrong.
Your neurons do live; some of them, at least, live for your entire lifespan. The cells in your cheek, or in your gut, turn over very, very rapidly. They're not trying to live as long as they can. In fact, when cells flip over to the dark side and try to live as long as they can, we call that cancer. And in fact, anytime you have entities interacting with each other, even if they each have their own loss functions, what you actually get out of that is a dynamical system in which there's a kind of predator prey dynamics, if you like, or pursuit kind of dynamics. And those dynamics have what you would call, in math, vorticity, meaning that the trajectories, in whatever kind of phase space you choose to look at them in, curl around each other; they curve, they're chaotic. And the thing is that anything that has curl, or that has chaos of that sort, does not look like gradient descent. Any kind of gradient descent process has zero curl and is all divergence. Yeah. So when something is curled, when something is chaotic, that means that you can't actually talk about something being optimized at that level. There's no pattern. Yeah. Well, there is a pattern, but you can't say such and such is being optimized for. And by the way, the Nobel winning economist Kenneth Arrow, what he won his Nobel for was a series of impossibility theorems about voting. And this is from way back, I think in the 40s. And it was exactly the same observation: that you can't have a perfect voting system, because when you have a bunch of people developing consensus through voting, you can no longer say that the entire vote is fair or is optimal in any sense, no matter what the voting system is. I wanted to get at something along those lines. And I'm not sure if the cell level is the best, because we're looking at the individual level, right? That's where the loss function, in my mind, should come into play. But what I wanted to get at: there's a lot of fear that, say, assume we have these super intelligent machines in five years, not realistic, right, but 50 years maybe, 5,000 years, 100%. And there's a lot of fear about this. And I had David Orban on, and he says, you don't have to fear it, you just become a hybrid, right? So you become transhuman. And that has a really bad ring to it, because we know that humans survived so well through all these challenges. So now we create something that could squash us like we are ants to them, right? That is Sam Harris's talk. Yeah, Sam Harris has argued this, of course; Nick Bostrom has argued this as well, and superintelligence. Yeah, but here's my answer to this fear: machines have the same problems we have, right? So they won't just go out and optimize for functions very different from ours. And you say it's not, but to an extent, our function is that we want to survive and populate the universe, kind of, right? So we want to create something more productive tomorrow than we have today. Whatever productivity means: maybe for machines it's technology, it's better knowledge, better philosophy.
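Blaise's zero-curl remark a few lines above can be made precise with a textbook illustration (generic math, not a reconstruction of his own NeurIPS argument). A pure gradient-descent flow follows the negative gradient of a scalar loss, and gradient fields have no curl:

$$\dot{x} = -\nabla L(x), \qquad \nabla \times \bigl(\nabla L\bigr) = 0.$$

By contrast, coupled predator-prey dynamics such as the Lotka-Volterra system

$$\dot{x} = \alpha x - \beta x y, \qquad \dot{y} = \delta x y - \gamma y$$

produce closed, rotating orbits in the $(x, y)$ plane: no scalar function decreases monotonically along those trajectories, which is the sense in which nothing is "being optimized" by the interacting system as a whole.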
I think this is definitely something we'd end up with if we really optimized for it. It's a good question. And I think machines have the same problem. So morality, software upgrades like religion, hardware upgrades like better organs. Can I pause you for a moment and just ask a seemingly off topic question? I feel like there are some hidden assumptions in what you're saying that I question. Okay. I mean, what do you think is the number of children that... actually, do you have children of your own? I do. How many? I have twins. Two. You have two. You realize that that's only at replacement level. That's not growth. Yes. So far, yes. Do you intend to have more? Yes. And do you think that, as a whole, in society, people in the developed world, which I think you would argue is progress, right, or represents some kind of arrow, do you think that that is a growing population as a whole? That is to say, are developed countries out competing, in numerical terms, the less developed countries that are, quote unquote, less advanced? Currently, no. No. And why is that? That is a good question. Because we have more resources. We have more resources. So if it's all about survival and growth, then why would we not be having twice as many children? We could, right? We have more money to feed them. You know, maybe the level to look at is not developed countries; it's communities within those that follow a certain belief system. Because countries are a relatively random assignment, especially outside of Europe, but even in Europe it's pretty random. And people think about borders, but there were no borders just a hundred years ago, and 200 years ago the idea of a nation state was not really formulated. That's true. But if we look at all of the countries on earth by GDP, and you make that one axis, and you make the other axis fertility, you will see an extremely clear relationship, whereby at high GDP, fertility plummets relative to low GDP. So why? To be honest, I would love to know the answer. We see this: there are lower childbirth rates, people want fewer children. So you do a lot of work to only have one or two children. Yes, yes. And there's also an enormous amount of infertility, and that might be slightly age related, but it's huge compared to 100 years ago, especially with men. But it doesn't exist in the developing world, which seems to have way more pollution; these are really broad stroke observations, not down to the individual. So there seems to be some dynamic at play. I don't know if it's nature, I don't know if it's humans, or if it's maybe some super-entity that controls us. Once we reach a certain level, fertility drops off, voluntarily or involuntarily; it just goes almost to zero. Yes, the data point is correct. Our World in Data has very good charts and graphs about all of this. And do you know the answer? I do. Or at least I think I know the answer. I mean, largely it's about choice. In much less developed countries, women generally have fewer rights; birth control and other kinds of fertility controls are fewer or harder to get. And the age of marriage and the age of first childbirth are much lower as well.
And basically, less agency is being exercised, especially by women, over how many children they have. In the absence of agency, you get something close to maximum fertility: all sex is unprotected, you have babies at the maximum possible rate, and you end up with a dozen or more per couple. In developed countries, the reason, to first order, that we don't have all of those babies is that people choose not to. And I raise this because it absolutely flies in the face of the growth-oriented, Darwinian thesis you're propounding, that the point of life is to maximize numbers, maximize volume. The moment we are able to choose, that is not in fact what we choose. We choose something else. Yeah, I don't know, because I don't know if we can really choose this; maybe it's something that's given to us. When we doubled the population from, say, four billion to nine billion, we can say that wasn't voluntary. But then it's always about who wins these struggles between groups of humans. Typically it's the group that's more productive, and who's more productive? The one that's more innovative. You cannot have long-term productivity growth without innovation. So that goes back to this Darwinian argument that there is something in us such that if we don't optimize for survival, we don't survive. You could say, well, maybe that's okay, then you just die, but we've managed to stick around for so long. Well, why is it that in advanced countries we make accommodations for people with disabilities and so on? The argument that you're making seems rather close to eugenic kinds of arguments. And I'm wondering why, on that view, we would bother to keep people alive who are no longer of breeding age, or who have genetic problems, which many of us do, or who are disabled. Breeding, what's the point? Breeding isn't the only thing that helps you in a Darwinian sense, right? That's obviously part of this on a biological level. But there are superpowers in people's brains. We know this especially from physicists, a group that seems to include more disabled people than the general population, people who have extreme gifts to offer humanity, and you enable them by making it possible for them to contribute to society. I don't think that most disabled people are Stephen Hawking. No, probably not, but they could be; you never know. And anything we call a disability might be a great ability in another window that we don't see, right? So I feel like you're trying to look at everything through a lens of utility, and that really is what I'm pushing back against. I think this framework of utility is very limiting, and when we start to really pursue it, it causes us to contort ourselves in all sorts of rather odd ways.
Whether we are choosing this individually or choosing it as a society, those are interesting questions, and I don't think there's actually a simple answer to where agency lies in any of this. But what does seem pretty clear to me from looking at nature as a whole, not just humans' role in it but all kinds of animals and plants and so on, is that this idea that everything is just in competition with everything else, that everything is trying to survive at all costs, and that if everything else dies that's favorable because it creates more space in the ecosystem or whatever, I just think that's not how it works. There are economic arguments against this, there are ecological arguments against it, there are mathematical arguments against it, there are empirical arguments against it. Even Darwin realized it. I think this highly utilitarian, optimization-based approach came more from Spencer than it did from Darwin. And in fact, some of the Russian thinkers, I'm thinking of Peter Kropotkin, wrote about a more cooperation-minded view of evolution. So what does Darwin look like without Malthus, without Spencer? The answer is not that there is no such thing as competition. Of course there is competition, but I think what we fail to notice is that competition and cooperation are very, very close, and in some sense almost indistinguishable when you look at them from a mathematical point of view. And the emergence of complexity comes from that dance. I think we absolutely agree on this. I had a similar debate with Simon Anholt that I really, really enjoyed. And we talked