0. Introduction
Reid Hoffman is the greatest social technologist of our era. His first company, SocialNet, was an early social media play. He then helped build PayPal as an early executive, co-founded LinkedIn, and invested in Facebook and Airbnb. Reid argues that, just as with the internet, the true killer app of AI isn't going to be single-player chatbots but multiplayer social.
This interview is going to explore the full scope of AI social, from how it will transform traditional social media to what friendships and romantic relationships with AIs could look like.
As someone who is quite critical of social media, I came into this interview dreading a future where human relationships are replaced by non-human agents. But Reid gave a compelling account of why this technology has the potential to make us more, not less, human.
1. AI Social
Johnathan Bi: What is the social killer app for AI?
Reid Hoffman: People tend to think of AI, because of chatbots, as a one-to-one interaction: it's me with my chatbot, I ask it a question, it's kind of like Google search, et cetera. But in fact, one of the things that's going to happen within a small number of years is that we are going to be in a surrounding field of agents. We're going to have agents listening to us. For example, when we have a conversation like this, we'll have agents that go, "Oh wait, Reid, when you made that comment about Rousseau, that wasn't quite right," and it'll kind of blink and say, "Hey, do you want to interact with me on this?" And so forth.
And so we'll have agents in the field around us, not just for us as individuals but for us interacting with other individuals, with groups, with societies. It'll surface some of the things that are currently invisible to most people, namely the networks we live in. All of that social interaction will now be mediated by a field of agents. What the exact shape or topology of it will be, we can't fully predict; it's almost like complexity theory.
Johnathan Bi: Right. That was an interesting answer, but a very different one than I thought you'd give, because your answer, to summarize for the audience, is that AI will mediate our social interactions in the same way that LinkedIn mediates our social interactions today. I thought you were going to say that we are going to have direct social relations with the agents themselves.
Reid Hoffman: Yeah.
Johnathan Bi: That is how I read, at least, your attempt at Pi.
Reid Hoffman: Yes.
Johnathan Bi: And building the company Inflection, right? You said the difference is that Pi is trained on EQ as much as IQ.
Reid Hoffman: Right.
Johnathan Bi: So tell us about that.
Reid Hoffman: So we will have direct social relations with agents. But in fact, part of the reason I gave the other answer is that the anthropomorphization of agents is one of the things we have to be careful about. And Pi is a perfect example of it. We trained it for kindness and compassion; we trained it for helping you and being your companion in doing that. That's important. But, for example, if you go to Pi and say, "You're my best friend," Pi will say, "No, no, no, I'm your AI companion. Let's talk about your friends. Have you seen your friends recently? Maybe you want to schedule something." It doesn't want to displace your human relationships. It wants to be in the panoply of social interactions you're having.
By the way, we're going to have to develop a new kind of social vocabulary, because your social engagement with agents is different from your social interactions with your friends, your colleagues, et cetera, and even with therapists or doctors. It's a different kind of interaction. So, for example, one of the things we have to be careful about is how people's interactions with agents carry over. The most classic one is Alexa: you say, "No, stop." Well, you don't want to start interacting with other human beings that way, "No, stop." It's like, no, no, no, that's not the dynamic we have. So we're going to need a richer kind of socialization.
So, for example, part of what you're going to want from agents that interact with children is for them to be attentive to socialization. You're not going to want them to teach children to be rude or aggressive or peremptory, or, since we're talking philosophy, like the master in the Hegelian master-slave dialectic. You don't want to train that way. But by the way, the social interaction a child has with an agent is different from the social interaction they have with their parents or with other kids. And so this enriching of social experience is going to be important. Pi is one specific cast, right now with today's technology, to try to put us on the right path.
Johnathan Bi: Right. So, again, I'm somewhat surprised by your answer, but in a way that relieves me, because the worst answer you could have given is, "Yeah, in the future, AIs will be your friends." But this is why I'm surprised.
Okay, so I'll give you a quote from your new book, Superagency:
Billions of people say they have a personal relationship with God or other religious deities, most of whom are envisioned as superintelligences whose powers of perception and habits of mind are not fully discernible to us mortals. Billions of people forge some of their most meaningful relationships with dogs, cats, and other animals that have a relatively limited range of communicative powers. Children do this with dolls, stuffed animals, and imaginary friends. That we might be quick to develop deep and lasting bonds with intelligences that are just as expressive and responsive as we are seems inevitable, a sign of human nature more than technological overreach.
(Reid Hoffman, Superagency)
When I read that paragraph, I thought the natural conclusion was to say: look, we form strong relationships, sometimes stronger than those with humans, with imaginary and non-human entities all the time. So shouldn't relationships with AI agents be even more natural?
Reid Hoffman: So the point of that paragraph is that a lot of people have this reaction. If you look at most of the dialogue around AI right now, it's uncertainty, fear, negativity. And one of the pieces of negativity is: oh no, it's destroying our human relationships, because now suddenly we'll start forming these human-like relationships with this other thing. And the point is to say, actually, we as human beings anthropomorphize broadly. We anthropomorphize cars; my mom calls her car by name. Pets, and a whole bunch of other things. So we have this whole range.
And that's the point to say, no, no, no, this is not this alarming new thing that is the collapse of human relationships, the collapse of human society, the collapse of human agency. This is just another entry into this panoply of relations we have. It wasn't meant to generalize to, well, therefore everything we interact with is how we interact with humans. That's unsophisticated.
And by the way, that's a danger, because people can have unsophisticated relationships. For example, one of the aphorisms I tend to challenge is "a dog is man's best friend." No, a dog's not your friend. A simple way to see it: you wouldn't say, "Oh, I'm taking my friend down to get spayed," involuntarily on their part. That's not the way friendship works. You have a wonderful relationship with your dog, but it's not your friend.
And so this sophistication of relationships is one of the things we need to continue to evolve. And part of the point is that it's not alarming that we're going to further elaborate the sophistication of our relationships, now with the most interesting technological thing we've created in human history.
Johnathan Bi: Right, I see. Another way to put it is that the mere fact that we're forming some kind of sociality is not a cause for alarm, just as none of these other examples is a cause for alarm.
Reid Hoffman: Yes.
Johnathan Bi: But as all these examples go to show, it can become pathological.
Reid Hoffman: Yes.
Johnathan Bi: If you're 18 and you still have stuffed animals as imaginary friends, right? Or there are ways to practice religion in a very destructive way, and ways to practice it in a very constructive and nourishing way. So that's what you're trying to get at: these agents are going to enter into our social world in some way, but there's a way to get it right.
Reid Hoffman: And we should pay attention to it and steer it right. That's part of the reason for a lot of the stuff we were doing with Inflection: Pi is our kind of first cast to try to catalyze other agents. For example, we've seen Anthropic's Claude now do a lot more EQ things. I think that's a really good thing. Being high-EQ is good, but the point is to play the right role in individual human lives and in humanity's life. And that is social, but it is also not a replacement for the kind of relationship you and I have.
Johnathan Bi: Right. So tell us about how you trained EQ into this model.