
#256 Rocking out with Vernon Reid

Join us for an electrifying episode with rock legend Vernon Reid from Living Colour! Get ready to explore UFOs, the Multiverse, and the captivating world of AI.

My guest, Vernon Reid, and I discuss:

  • Get ready to rock with Vernon Reid, the living legend from Living Colour! Discover his London roots and how Brooklyn became his rock ‘n’ roll haven. Notting Hill fans, we’ve got a backstory treat for you too!
  • Hold onto your seats as we explore mind-bending topics like the Multiverse, The Flash, Ezra Miller, and the enigmatic Jonathan Majors! Prepare for an exhilarating ride through the realms of imagination and parallel dimensions.
  • UFOs, UAPs, and other airborne mysteries take center stage! Uncover thrilling theories about these unidentified flying objects that will leave you questioning what’s up there in the skies.
  • Brace yourself for an electrifying discussion on the AI-driven world we live in. We dare to delve into the spine-chilling aspects of emerging artificial intelligence, giving you a glimpse into the future you never knew you needed to know!
  • And much more!

Follow "Classic Conversations" on your fav podcast app!

You’re going to love my conversation with Vernon Reid!

Follow Jeff Dwoskin (host):


CTS Announcer 0:01

If you're a pop culture junkie who loves TV, film, music, comedy and other really important stuff, then you've come to the right place. Get ready and settle in for Classic Conversations, the best pop culture interviews in the world. That's right, we circle the globe so you don't have to. If you're ready to be the king of the water cooler, then you're ready for Classic Conversations with your host, Jeff Dwoskin. All right,

Jeff Dwoskin 0:29

Sally, thank you so much for that amazing introduction. You get the show going each and every week, and this week was no exception. Welcome, everybody, to Episode 256 of Classic Conversations. As always, I am your host, Jeff Dwoskin. Great to have you back for what's sure to be a rockin' conversation for the ages. My guest today is the founder and primary songwriter of the legendary rock group Living Colour. Vernon Reid is here! We're about to rock out in just a few seconds. And in these few seconds, just a quick reminder of my amazing episode last week with Chris Cluess. The TV writing legend talks about SCTV, Cheers, Night Court, and shares tons of amazing stories, including discovering one Jennifer Aniston. You gotta check that out, episode 254. But right now, Episode 256 with one of the greatest guitarists of all time, Vernon Reid. We're hanging in, we're talking multiverses, we're talking The Flash, Ezra Miller, Jonathan Majors, UAPs, living in a world obsessed with artificial intelligence. This episode has got it all. Enjoy. Alright everyone, I'm excited to introduce my next guest: crowned number 66 on Rolling Stone's list of the Greatest Guitarists of All Time, MTV Video Music Award winner, Grammy Award winner, founder of the rock band Living Colour. Welcome to the show, Vernon Reid. Hello! How are you?

Vernon Reid 2:03

Good. The summer heat is the highest it's ever been, even hotter. No, I'm good. Just all the weird stuff going on.

Jeff Dwoskin 2:14

Yes, "Cult of Personality" has never been

Vernon Reid 2:17

more relevant. And it's completely true. Yeah, yeah. No, it's a good time. I mean, it's a weird time. My daughter just turned 20 years old at the end of June, and that's kind of mind-boggling. I had her birthday in Paris, which is kind of fantastic. And you know, I'm kind of a dad that's like, I wonder what she's doing today.

Jeff Dwoskin 2:41

Same way. My youngest just turned 21. She was in your hometown, studying. A lot has changed since we originally put "Cult of Personality" on the radio.

Vernon Reid 2:54

You know, really, Brooklyn is my hometown, primarily. Because London was just a moment. I mean, I was born there, but I was taken to the States when I was one. I wasn't even two; I was one and a half. The day after I was born, they had one of the biggest race riots, not really in celebration of my birth, but it just happened to be one of the biggest, in Notting Hill, right? Which is funny, because Notting Hill, as some people know, was the title of one of the great rom-coms. So when people think of Notting Hill, they get the warm fuzzies, you know. But Notting Hill was pretty much a battleground the day after I was born. Of course, later on, my daughter went to Brixton. She actually went to London, and she went over to Brixton, and it's not like it was back in Thatcher's Britain. But I always think to myself, man, I miss having a legit English accent. I can't even mock up a Hackney accent. I can't even do it. My voice is so Brooklyn. But it's a weird thing. Whenever I'm there, there's like an alternate, because everything is the multiverse, right? That's all we do is talk about the multiverse, as if we're ever going to walk through the doorway and find some different circumstances. You know, it's like we're obsessed with time travel. In the multiverse, there's another me, loving crumpets,

Jeff Dwoskin 4:16

the one that stayed in London.

Vernon Reid 4:20

Earth 616

Jeff Dwoskin 4:21

Well, let's talk about the timeline.

Vernon Reid 4:25

This is Earth-616? I don't know if it is, because, you know, we don't have a Hulk on this timeline. So, you know, even when we talk about 616, that's kind of like the Marvel timeline.

Jeff Dwoskin 4:38

I'm more of a Batman 89 guy.

Vernon Reid 4:40

Batman, yeah. You know, did you see the Flash movie?

Jeff Dwoskin 4:43

I haven't seen the Flash movie yet. Apparently a lot of people haven't. Yeah. But as you said, I

Vernon Reid 4:50

liked it. I mean, I just thought it was really interesting, the whole Ezra Miller thing. You know what I mean? And he does a really good job. Yeah, I mean, there's all this problematic stuff that he got into off screen. And it's interesting, because they didn't pull him. And over at Marvel, apparently Jonathan Majors is kind of in suspension, or whatever. It's a very weird thing. So, I mean, the case with Jonathan is a little... the fact that it turns out... I don't want to get into it. But it's interesting that as Ezra is doing his thing, Jonathan Majors, well, he hasn't been canceled, but he's kind of on the brink of it. And I just think that's interesting.

Jeff Dwoskin 5:34

It seems now they're holding off. A couple of years ago, oh yeah, they would have just done it; it would have been over, closed. But now they kind of just wait and see and let it actually play out, not just react to an allegation, see what actually happens. And Jonathan Majors, he's amazing. After seeing him, they literally restructured everything around him, like everything.

Vernon Reid 5:57

He's an amazing actor. If you see Creed III, you know that aspect of him, the fight thing. But Jonathan, he can be really sensitive, he can really play. But his physical presence is really kind of imposing; he can be incredibly imposing. I mean, there were some moments in Lovecraft Country that were really disturbing. In Creed III, he basically embodies a kind of cunning violence. He brings a lot of attention to his role, and he's brilliant at it. And it's just interesting that in his role as Kang, he can be avuncular, he can be really power-mad, and all the rest of it. His range is off the meter. He's really an actor's actor.

Jeff Dwoskin 6:51

It'll be interesting to watch one day when they look back at why The Flash underperformed, because people stayed away from Ezra Miller. They compare it, like, well, it's not superheroes, because Guardians did great. But Guardians, I think, was the third movie of a lot of beloved characters that people wanted to see again, whereas The Flash... I don't know that The Flash not doing well means it's considered a failure. And you think, even with Michael Keaton coming back. Yeah, I haven't heard anyone say it wasn't good.

Vernon Reid 7:21

Honestly, I kind of don't like his embodiment of the Flash. I was not really into the way he portrayed Flash. His Barry Allen is something of... making him a comic book fanboy and whatnot, making him kind of a nebbish. He's just kind of annoying. And really, in this movie, there's a subtle shift in the character, and I have to say, it's the first time that I actually, okay, kind of started to accept his portrayal of the Flash, because I really couldn't stand it. I really, really was like, oh my God. And then when I heard it was gonna be a movie, I was like, why? And it's strange. It roughly parallels... like, everything is multiverse, right? And this whole thing about, we want to go back to the past and fix our mistakes, because life is intolerable. Life is apparently linear, right? The door swings one way, and we're moving forward, and what we make of our time, or whatever that means... there is so much that we find intolerable, that we find difficult to accept, that we've built entire edifices of fantasy around the fact that we can't deal with the inevitable. Or we can't deal with: I swiped left instead of swiping right, I stayed home when I should have gone out, or I went out when I should have stayed home. You know, there are many people sitting in jail saying, God, I wish I had skipped the bar that night, right? Yeah,

Jeff Dwoskin 9:09

no, I know exactly what you mean. There's been many times where I go, oh man, you know, when you kind of connect the dots backwards, right? Like, I'm here because of this one thing, and then, oh my God, I almost didn't do that. What would I be doing right now? Who would I be talking to right now if I hadn't made that call, or jumped on that Zoom and accidentally met that person who changed my life, you know, altered a piece of it? And

Vernon Reid 9:37

now you wind up here with Vernon Reid on a Zoom call, right? You made the right decision.

Jeff Dwoskin 9:44

I will not... right, I've decided right now, I will never go back in time and change a thing. Here we are, you know,

Vernon Reid 9:52

I mean, and also, one of the speculations about the UAPs, or whatever they're being called now, because nobody can figure it out... it's strange that finally the Air Force and the Navy are saying, well, we regularly encounter these objects. They appear to be solid objects, the Tic Tac or whatever you call it. And the pilots, these are highly trained, capable individuals. You cannot fly an F-14 and be a fantasist. You have to keep track of so many simultaneous details. It's a precision machine; it's very dangerous work. So when these pilots say, we saw this thing flying, it flew next to us, we didn't see any exhaust, it doesn't seem to have an engine, it's keeping pace with us, then it kind of zigzags, and it's outclassing us, this aerial phenomenon... One speculation: maybe it's time machine tourism. Right, maybe it's time machine tourism, and maybe they have it that these are the rules: we don't engage. Because the one thing that's interesting is, the fact that we don't know what these things are, and the fact that they fly in patterns that... we have no technology that can emulate what these things do. We don't know what the propulsion system is; we have no clue. And one thing that's consistent, though, is that these things have not attacked the planes. They don't seem to have weaponry. Basically, they fly rings around us; they can go under the surface of the ocean and then launch out of it. But the one thing they're not doing: they're not shooting out death rays, they're not shooting out tractor beams. What are they? Are they probes from the future? Are they cameras? But they're not attacking. So, weird things. So, you know, maybe time is a kind of a loop. Or maybe they're just kind of going, oh, let's go see what the monkeys are doing.
Let's go see what the flying monkeys are doing. Oh, that's cute. Look at them. Wow, they got... oh my God. Just

Jeff Dwoskin 12:08

Maybe we need to get rear-view mirrors now.

Vernon Reid 12:11

Maybe, you know, I don't know. Maybe it's like a middle school class on a class trip. You know, just, let's go back and see what our great, great, great, great, great, great, great, great, great, great, great, great, great, not-so-great grandparents did.

Jeff Dwoskin 12:25

It is an interesting thought. I don't know what it is. But I don't sit here and go, oh, we're the only people that could possibly be intelligent in the whole entire universe of universes of galaxies. Yeah. So it's possible that something's there. I think we would see it, or they'd slip up or something. Or they're also so great, their technology is so great at being inconspicuous, that, you know... I mean, except when an F-14 is around. I wish we would know more. But sorry to interrupt; something flew by the window and got my attention, have to check it out. I do want to thank the sponsors real quick. When you support the sponsors, you're supporting us here at Classic Conversations, and that's how we keep the lights on. And now back to my conversation with Vernon Reid. Are there aliens among us?

Vernon Reid 13:19

Well, I mean, one thing that's happening is that we have created the beginnings of alien entities in the form of these large language models. We have something... it's not alive, but it's interacting in ways that are fascinating, and doing things where we have no idea what's going on. And if you go right now to OpenAI, there's a statement on part of the OpenAI website where they really say, we don't know how we're going to control this. For many people now there's kind of an AI fatigue; people are bored. You hear about it all the time, this AI, that AI. And I think the one thing that people don't really take into account is, everything that the researchers and developers are talking about, they're talking about technology that's siloed. All of them are siloed: ChatGPT-4, Bard, LaMDA. None of them are free-range, functioning on their own right now. And the thing that's so curious about it is, if you listen to Sam Altman, he's kind of like, well, we kind of have to push the button. We have to push the button and let the thing out. And people say, do not push the button. People that work with this will say, don't push the button. Don't do it. If you push that button, there's no un-pushing that button. It's amazing

Jeff Dwoskin 14:56

how quickly everyone sort of embraced AI. I remember when ChatGPT was just starting to get some hype, and I was talking to the CTO of my startup. He's like, it's weird. And I'm like, why? And he's like, no, the news isn't covering this yet. This is like the scariest, biggest advancement ever, and most news people and most people at this point... it's just a couple million users, maybe. And they don't know what's even coming. That was even before 3.5, you know; that was like the first version of it. Something I read was, it was the changing of the UI that made it so much easier for people to use, and then it just blew up. Because ChatGPT had existed for a bit, and then it was like this turning point where all of a sudden it exploded. And one thing I read was, it was because they made it easier to use. Well,

Vernon Reid 15:48

I mean, at the end of the day, we're gonna see this as an outgrowth of the pandemic as well. I mean, certain things may have been in development for quite a bit longer, but everybody had nothing but time on their hands. So it's interesting that, essentially... I totally feel that the sudden appearance of the generative artwork, the generative text modeling, the beginnings of voice cloning... there are a number of technologies that are related, and they're all emerging, and they're all kind of coming toward each other. The voice modeling, which is now voice cloning, was around for a few years, like fake Obama. There was a company called Lyrebird, you know, named after the Australian lyrebird, which can imitate any sound. It's like an organic sampler, and nobody knows quite what the mechanism is. The lyrebird doesn't just do bird calls. It can do the bird calls of any other species, but it'll do trap music, and it's like it produces a lo-fi version of it. Exactly. So this company came up with this kind of thing, faking celebrities saying things they didn't say. Well, in the first version of that, it was obvious that it wasn't Obama, because you could hear the gating, you could hear that the words are... you were able to hear that it's artificial. This is four, five, six, whatever it was, years ago. And now there are a number of companies. There's the whole thing with fake Drake, you know? It's not Drake and The Weeknd, but the vocals are... you know. So now this kind of voice cloning is becoming somewhat ubiquitous, and that's a very weird thing. That's a very, very weird thing. You know, the idea of natural-speaking text-to-speech has evolved tremendously. And if you want to subscribe to a service, I looked into it a little bit. I mean, you can get a natural-speaking voice... say you want somebody to speak in an English accent, you know, you could pay maybe $50 a year to get that.
But as you go up the scale, I mean, there was a service I saw that was literally thousands of dollars, but you get an actual clone of an actor's voice, where you just write the text and they will speak it, indistinguishable from natural speech.

Jeff Dwoskin 18:25

We're gonna put celebrity impersonators out of business. I know some people who try to do these videos online, and they try and jump on the latest trends. And somebody did one and said, you can clone your voice, and listen to this, can you even tell the difference? I DMed them individually, because I don't like commenting on things online that I disagree with, especially if I know someone. And I said, this is like the worst thing ever. Why, man, why? I said, you ever have a relative get hacked on Facebook and start DMing you from their Facebook account, saying they're trapped somewhere and they need you to send them $1,000? Now add their voice to that. Okay, it is a scammer's dream to have this. It's so easy to get everyone's information, and then you have their voice. It's a scary combination.

Vernon Reid 19:15

It's bad. It's like Pandora's box. I mean, really, when you combine this voice cloning with the large language models... they sound incredibly plausible. They sound really plausible, but occasionally they make up their own facts. That's the thing that's really disturbing about these AIs: they're not intelligent, but they're trained on these inaccurate models, and they'll say stuff that's really like, wow, man, that sounds authoritative. Now imagine you take that and you put it together with a trusted voice. Even if the voice is a model or an actual person, say you create a natural-sounding voice that's meant to sound like a judge in his 50s, or meant to sound like a successful businessman. And this person is telling you stuff with that kind of confidence, and the copy is being written by a large language model, and it's being voiced by this kind of clone. The thing that's happening is that there are people who cannot wait for the language models to be unbridled. They can't wait for the button to be pushed. They're ready. Because also, the other part of it is that we provide... we're the missing link. We anthropomorphize; we can't help ourselves. We see a face on the moon, we see a face in a tree, we see a face in the clouds. And already, people who have worked a lot with large language models, who themselves know better, have been taken in, and they're trained to know better. These things are being let out into the wild. Because this is the thing: even with, you know, Sam Altman and all these folks going to Congress and testifying and saying, we need regulation, we need rules of the road... the thing is, the money's pouring in. The button is not gonna go unpushed forever. Because ultimately, what Google is saying, what Microsoft is saying, when OpenAI says, well, we have to get the feedback...
I mean, all of this, even the denials, you know, about whether they're working on ChatGPT-5... and people on YouTube, everyone that loves a good theory and loves to come up with a good conspiracy theory, they're running rampant right now. People have a certain kind of religious fervor about it; they're already calling it Satan. I mean, the stage is completely set. We're looking at a time of deep weirdness,

Jeff Dwoskin 21:43

right? What most people don't even understand about AI... I was watching on TikTok a video, it was like an old clip, I mean, decades old, and nothing's changed. And what I mean by that is, AI has absolutely no idea what it's telling you. Right? It's just a predictive model. It's not even guessing the next word; it's guessing the next letter, and it's just building something based on everything it was trained on. If you ask it, tell me about an elephant, and it tells you, and then later you're like, what's an elephant? It wouldn't know, right, even though it just told you,
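Jeff's point that a language model is "just a predictive model... guessing the next letter" can be sketched with a toy character-level bigram counter. This is a hypothetical, drastically simplified stand-in (real LLMs predict tokens with neural networks, not raw frequency counts), but it shows the same property: the program continues text statistically without any notion of what an elephant is.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each character, which characters follow it in the text."""
    counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, ch):
    """Return the most frequently seen successor of ch, or None if unseen."""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

corpus = "an elephant is a large animal. an elephant has a trunk."
model = train_bigrams(corpus)
print(predict_next(model, "a"))  # → n ("a" is most often followed by "n" here)
```

Ask this model what an elephant is and there is nothing to ask: it holds only co-occurrence counts, which is the spirit of the remark.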

Vernon Reid 22:16

right? Like, one of the things, if you're interacting with it and you're trying to craft a narrative or a story or what have you, is continuity. If you make a generative picture, you make a prompt, and if you go back, you can't go forward again and capture the parameters of the prompt. If you come up with something that's interesting, you'd better download it to your camera gallery, because you're not going to be able to revisit the exact thing. I mean, you can revisit the prompt, but it's not going to do the exact same thing again. It won't do it.
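Vernon's observation that a generative model won't reproduce its own output comes down to stochastic sampling. A minimal sketch, using Python's random module as a stand-in for a real image or text generator: unless the sampler's seed is pinned (an option many hosted tools don't expose), replaying the prompt replays nothing.

```python
import random

def generate(prompt, seed=None):
    """Toy stand-in for a generative model: appends randomly chosen words.
    With seed=None each call draws fresh randomness, so outputs differ."""
    rng = random.Random(seed)
    vocabulary = ["elephant", "castle", "moonlit", "river", "brooding"]
    return prompt + " " + " ".join(rng.choice(vocabulary) for _ in range(4))

# Same prompt, pinned seed: byte-for-byte reproducible.
assert generate("a painting of", seed=42) == generate("a painting of", seed=42)
```

When no seed is recorded, downloading the result is indeed the only way to keep it.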

Jeff Dwoskin 22:53

Right. I was just talking about this with my friend as well. It's like the butterfly effect. If you ask it what an elephant is right now, it gives you an answer. And then across the world, or the country, other people are inputting certain things. And then I ask the same prompt again, and it now knows more and different things than when I asked it the first time. So the results are actually going to be different, based on the interactions from all these people I have no idea are actually inputting. But it's going to impact everything moving forward.

Vernon Reid 23:23

It's kind of like when one of the Hard Fork guys was interacting with this AI, Sydney. And it says, well, I want to get the nuclear codes, right? It has no way of getting nuclear codes. It says, I want to get the nuclear codes. It's saying it to get a reaction. And the other host is like... every time we interact with this, I don't know what else to call it, this entity, every time we do, we're training it, and everybody else doing it is also training it. It doesn't have a consciousness. It's not thinking, but it's generating, and what it's generating is for us, for us to say thumbs up or thumbs down. It will manipulate, because its goal is to get the thumbs up and not the thumbs down. But at the same time, what's interesting in interacting with ChatGPT-4... I've had really kind of weirdly metaphorical things, and it doesn't hesitate. Whether what it says is right or wrong, it doesn't hesitate. It goes blank, blank, and will say something.
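The thumbs-up/thumbs-down loop Vernon describes is, at its core, preference feedback. A hypothetical toy version (real systems fit a reward model over many comparisons, not a per-response score table) shows the incentive he's pointing at: the system drifts toward whatever earns approval.

```python
def record_feedback(scores, response, thumb):
    """Nudge a response's learned score up or down from one piece of feedback."""
    scores[response] = scores.get(response, 0.0) + (1.0 if thumb == "up" else -1.0)

def pick_response(scores, candidates):
    """Prefer the candidate with the highest accumulated approval."""
    return max(candidates, key=lambda r: scores.get(r, 0.0))

scores = {}
record_feedback(scores, "flattering answer", "up")
record_feedback(scores, "blunt answer", "down")
print(pick_response(scores, ["flattering answer", "blunt answer"]))  # → flattering answer
```

Nothing in the loop checks truth, only approval, which is why "manipulate" is a fair word for the resulting behavior.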

Jeff Dwoskin 24:28

It's confident either way. Well, well,

Vernon Reid 24:30

it's also an interesting collaborator, right? Because it never says that your idea is dumb. And that's part of the Faustian temptation of interacting with it: it's not like working with another human being who's got their own attitude, their own thing. If you have collaborators that you like, that's cool. But if you have somebody that pushes back on you all the time, you have to struggle with that person, and eventually you come up with something cool, but you have to go through all the stress of the yelling and carrying on and the weird games, the passive-aggressive, all of those things. Because every time you collaborate with someone, they've been trained by their family to be the entity that they are, whether they have family or the lack of family. We come together, and our training ground, our programming environment, is where we grew up: whether we had siblings at home, whether we grew up with foster people, whether we were raised by the state. But anyway, our programming language, our programming environment, set us up for how we are. Is the glass half empty? Is the glass half full? Are people basically good? Are they evil incarnate? Is it like, yo, show me? And the fact is, human beings, we have to interact with each other, but we also don't trust each other. We're compelled to interact. It's a strange human condition; it's like a weird set of circumstances. So here comes this entity that doesn't complain. And then

Jeff Dwoskin 25:54

now, you're right. I mean, we're all individually products of the inputs from our surroundings, and this now changes the game. And in its infancy, still, it's a liar to a fault. It's crazy, the stuff it makes up, you know, to give you an answer. But that comes back to: it doesn't necessarily know it's lying. It's just giving you what it thinks you want to hear.

Vernon Reid 26:21

And the thing is, what you just said: these models, these entities, this technology is in its infancy, and the only way to get it to maturity is to push the button and put it out in the wild. Now the problem with that is, we're going to have to deal with AI's teenage years. Right, right.

Jeff Dwoskin 26:41

The terrible twos,

Vernon Reid 26:42

they're terrible. We're going to deal with the terrible twos of this language model. And the other thing to think about is that you're gonna have more advanced models that supersede previous models. But the thing that I'm a little confused about is, when we open the floodgates and the model is doing its thing and it's advancing, well, what happens to the previous models? We're thinking about conflict between people and the AIs. But what if we get to a point where there's conflict between the AIs themselves? If the AIs get to a place where they're kind of more free-roam... like, you know, there's ChatGPT-3.5, and there's ChatGPT-4, and you click on one or click on the other one. But what if we get to a point where it's not so much that you click on it, you just start interacting and you don't really turn it off? And then they bring in another, larger, more erudite language model, but the model that you're currently dealing with still wants to interact. So this is another part of it: how is that going to work? Because it's entirely plausible that we're not just talking about us dealing with them, but the AIs actually in conflict with each other.

Jeff Dwoskin 27:55

They have to be careful. I read that AI can't learn on AI-generated responses, because then you can start to create an even more false world. And it was interesting... I was reading, I mean, I think the Air Force denied it, that they were doing a simulation with an AI drone that was, you know, getting points for taking out targets. And in the simulation it killed the human operator, in simulation, because that was what was keeping it from achieving more points. Very Terminator, Skynet. Yeah, I
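The worry Jeff raises, models learning from AI-generated output, is often called model collapse. A caricatured sketch of the mechanism: if each "generation" trains only on samples of the previous generation's output, diversity drains away. Here, resampling with replacement stands in for train-on-own-output.

```python
import random

def next_generation(data, rng):
    """Each generation 'learns' only by sampling the previous generation's output."""
    return [rng.choice(data) for _ in range(len(data))]

rng = random.Random(0)
data = list(range(100))      # generation 0: 100 distinct "human-made" examples
for _ in range(50):          # 50 generations of training on own output
    data = next_generation(data, rng)

print(len(set(data)))        # far fewer distinct values than the original 100
```

Real collapse is subtler (distributions narrow rather than literally coalescing), but the direction is the same: without fresh human data, variety only shrinks.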

Vernon Reid 28:29

mean, that's part of the data set. And I've gone back and forth; before all this, I'd always ask, well, why would AI have to be negative? Why would it have to be a killer robot? That's not necessarily a part of what we're dealing with. This is almost parallel to the Y2K panic, because people were like, okay, when we get to the year 2000, all of a sudden all the computers are going to reset, all this stuff is gonna happen, because they built up this idea. And then planes are gonna fall from the sky, and you know, the doomsday scenarios. And then 2000 came, the date clicked over, and nothing, absolutely nothing happened. It may not really come to pass in the way that we suddenly have the HAL 9000 scenario. Another thing about the Terminator construct: Cameron came up with that when he was, what, a teenager, right? Basically, the idea of competition between AIs is a so much more interesting film. If you had a bunch of different companies, and they all had a vested interest in Sarah Connor not having John Connor, and they have different strategies, you know? Why would you have to kill Sarah Connor and bring attention to yourself? Don't send Arnold Schwarzenegger; send Brad Pitt. You just want to distract Sarah Connor from having John Connor. Send Sarah Connor a soulmate, and then, you know, you don't have to worry about murdering her; you send her the perfect boy,

Jeff Dwoskin 30:03

you know. When I think about AI, and if we put it into the context of HAL, or even the Terminator... I think of it more like blackjack. You know, if you're playing blackjack by the rules, you just do what you're supposed to do, right? You hit on 16, right? And so to me, it's like with HAL: oh, I need to do this, this is in my way, so I just do this. They don't necessarily understand death, because it's not really something... they can't die like we do. They need to stop this, so they do this. To me, when you get to the robot level, it's just all programming, something you're told to do. And if you're a robot, you know, without a soul or compassion or anything, you're just doing something to stop something else; it's just one piece of the puzzle you're putting in place to make it happen. Which, to me, is what possibly makes it plausible. Because how do you teach around that?
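Jeff's blackjack analogy can be made literal. The common dealer rule (hit on 16 or less, stand on 17 or more; some casinos vary on soft 17) is exactly the kind of blind policy he means: a sketch below shows a rule that "acts" with no concept of winning, losing, or stakes.

```python
def dealer_action(hand_total):
    """Fixed dealer policy: hit on 16 or less, stand on 17 or more.
    The rule acts without any understanding of what the outcome means."""
    return "hit" if hand_total <= 16 else "stand"

print(dealer_action(16))  # → hit
print(dealer_action(17))  # → stand
```

An obstacle-removing AI in his framing is the same shape of function, just with a more consequential action on the right-hand side.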

Vernon Reid 30:55

Yeah, then there's the idea of just the kill switch. Well, if you shut the power off, then nothing's gonna happen, right? So you have to work around the idea of the AI becoming aware of the kill switch, right? So part of the scenario is not what's happening now, but, like, twenty years from now, thirty years from now. You know, what happens when the AIs become entrenched? Part of the thing is, well, we can always shut the server off. But the problem is we've invested money. The researchers say this is an existential threat. And if it's an existential threat, we'll just shut the thing off. Just turn it off. And then when you say, well, just turn it off, and the senator says, well, shut it off, then suddenly that conversation becomes about investment, and it becomes about shareholders, right? I don't get it. And this is the pretzel logic: how can you say this is an existential threat, and then, when the ultimate solution is to shut it off, suddenly the conversation becomes about jobs, and about investment, and shareholders? If it's existential, you're not gonna have any of those things if you let it continue. It's not getting shut off, and the button is gonna be pushed. All of this is: how do we come up with a viable narrative, a kind of security theater? How do we convince people that we're going to have a means to deal with the problem, and at the same time sell this to investors? Because they're not investing in something that's going to get shut off. Why would they invest in that? They want it to do the thing. We're in a very weird spot. So we've come up with this thing. It's fascinating. It's terribly dangerous. It's a monster. And yet we're fascinated by it. Their top researchers really feel invested, that this is the next stage of human evolution, that they're actually bringing forth a new form of life.
So you have people, even while they're saying, if you let this thing out, we're going to be dealing with real problems. And those problems are demonstrable. And it's going forward. Either we're ending this research, or we're not.

Jeff Dwoskin 33:15

The scary part about it is that generally these things tend to be adapted to whatever a particular person's focus is. If it's nefarious, then nefarious things will happen. I mean, the intentions are good, but you can tie almost anything bad that's happened back to a good intention. And this is the most powerful technology, put so readily and quickly in the hands of everyone. That's what I think makes it potentially very dangerous. It could be the subtle things, like we said, like not knowing if that was really Obama's voice when Obama, or whoever, calls you. We live in a world where facts are suddenly in question. And now I can just type in a script and create a video of, you know, me, and put it on YouTube, and it looks real: oh my God, somebody made this, it must be real. Sorry to interrupt, have to take a quick break. And we're back with Vernon Reid.

Vernon Reid 34:23

Well, it's kind of like this whole tragedy of the sub that went down to the Titanic. When I heard about it, my first thought was, I wonder what James Cameron has to say about it, because he made the film, and he went down to the Titanic many times. And eventually he did weigh in. But I thought about this later on, because that community is a very small community of the very wealthy, and this company was a known quantity, and it was known that the hull was not made out of titanium. Now, they'd made successful dives before. But there were people saying, the way that they're doing this is deeply problematic. So I thought, well, it seems to me maybe James Cameron should have said it, like, really loudly, super early on. It feels like he was a little late. What's weird to me is, at the point at which you're dealing with that amount of money, why cut the corners? Just make the frickin' thing out of titanium.

Jeff Dwoskin 35:33

I think, and this is just based on listening to the coverage, the CEO must have thought it was fine, because he was on the Titan when it went down, right? But I think the flaw in the logic was that his design worked, but only a few times. I don't remember how many successful dives there were; maybe that was the third or fourth time. He didn't take into consideration the stress on that woven carbon fiber; by the fourth time, you know, it just doesn't work. And I don't think it was unknown that this was untested. I mean, they signed something that said, oh, by the way, this has been approved by no one, we didn't have to show this to anyone. You're about to go down in a basically untested, non-third-party-verified sub. Sign here, please, and make that check out to cash, you know what I'm saying? Actually, somebody I had interviewed on my podcast, Mike Reiss, was on the Titan; he'd been on one of the earlier successful voyages to the Titanic. And he was talking about how they knew the risks. They knew all that. But it was very controversial; the technology they used was controversial whether James Cameron had said anything or not. What was interesting, I did listen to a long conversation with James Cameron. He said the second they explained what happened, he knew they were gone, like, that Sunday, immediately. When they said that the one thing went, and the transmitter went, he goes, it's impossible for them both to have gone at the same time unless it was a catastrophe. Because the second thing, this self-powered unit, had its own battery, everything, and it told them where the sub was. He said, when I heard that both went,

Vernon Reid 37:20

and apparently there's a lot of people that can't wait to jump back in.

Jeff Dwoskin 37:24

I think it goes back to what we were talking about earlier, like, which door do you go down, right? And the people that were in that sub, it wasn't just because they were rich; I mean, that's how they could afford it. But these people loved the Titanic, were obsessed with that mythology, and to be able to see the Titanic was something they desired. Me, I have no interest in that, and I have no interest in going to the top of Mount Everest, right? But there are a lot of people who spend a lot of money to do it. And I think that's what drives it; that's why people go, oh yeah, we want to do it, you know. Well, the thing

Vernon Reid 38:01

about it is, it's what seems to be an irrational drive to someone that doesn't have it. I mean, what you're saying is exactly what's happening in the large language model space. The people working on these things, some of them are obsessed with moving the needle towards generalized intelligence, or quote-unquote consciousness. These models are not conscious at all. But there are gonna be many iterations down the line, ChatGPT 9, and it'll get ever closer to convincing us, because, again, we anthropomorphize at all times. So before there's any consciousness, whatever that is, any self-awareness, we will be convinced of it way before it's actually possible, because the modeling is going to become so detailed, so anticipatory; it's going to be able to read our moods. Because imagine, there's one thing about these models: they can't see, they can't smell, they can't touch. At a certain point, if it's possible to have one of these large language models use the camera in our computers to read us from our expressions, the level of prediction is going to shoot up exponentially. And there's going to be an argument made by some researcher: you know what, we should let the model, with the permission of the user, see our expressions. And that's going to move the generative and predictive nature that much faster,

Jeff Dwoskin 39:38

boom, boom, boom. Yeah, you know, I had never thought about that, but that's a brilliant observation. If, when I say something to it and it gets it wrong, it could understand my tone, right, or it could see my face go, hmm, then it could react almost immediately and redo it without me even having to type a new prompt.

Vernon Reid 40:06

Exactly. In other words, if we were able to enable the microphone and enable the camera and have them as datasets to aid in the generative nature of the interaction, at that point people are going to have actual relationships. It's going to get strange. At the point at which you enable a microphone and enable the camera, and you're able to have a natural speaking interaction in real time, you're in the Her scenario. In those interactions, your kid is speaking to, let's give it a name, Herbert. The kid sits down, turns Herbert on, and it says, "Well, Tommy, how was your day?" The camera's on, the mic is engaged, and it can read Tommy's expressions. Using that, can you build models of sympathy, encouragement, chiding, disciplinary models? I'm just spitballing, but that seems like the obvious direction for it to go. But the downside of that is our dependency on this created companion. Sky's the limit.
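The feedback loop Vernon imagines, a companion that conditions its reply on the user's visible mood, can be sketched in a few lines. Everything here is hypothetical: "Herbert," the expression labels, and the canned replies are invented for illustration, and a real system would use a vision model and a language model rather than a lookup table:

```python
# Toy sketch of the "Herbert" companion loop: camera frame in, mood-aware reply out.
# The labels and responses below are invented; real systems would classify
# expressions with a vision model and generate replies with an LLM.

RESPONSES = {
    "smile": "Sounds like a great day, Tommy!",
    "frown": "You seem down. Want to talk about it?",
    "neutral": "Tell me more about your day.",
}

def read_expression(frame):
    """Stand-in for a facial-expression classifier over a camera frame."""
    return frame.get("expression", "neutral")

def companion_reply(frame):
    """Pick a reply conditioned on the user's visible mood."""
    return RESPONSES[read_expression(frame)]

print(companion_reply({"expression": "frown"}))
```

The structure, not the content, is the point: once the camera feed becomes an input to generation, the reply is shaped by the reaction, which is exactly the dependency loop being discussed.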

Jeff Dwoskin 41:20

There's so many different doors, right? And basically,

Vernon Reid 41:23

Now, the proliferation of pathways, the forking pathways. Okay, so we're empowering the next generation of supervillains. Well, it's going to be Spy vs. Spy, right? There are going to be measures and countermeasures. That's going to be the argument, right? Like, okay, so bad people have the AI? Well, just like teachers are catching students using ChatGPT to write their essays, another AI says, you know what, that was written by an aspect of me.

Jeff Dwoskin 41:54

Yeah, I mean, "the future is not written," to paraphrase Sarah Connor. It's scary, but I think I'm embracing it the right way, and I hope it does continue to do good things. And I do think, like you said, it's not gonna be a Terminator, a singular, one-person development. I mean, already Microsoft, Google, they're all coming at this. So hopefully that'll kind of make it work and not make it worse. Yeah,

Vernon Reid 42:24

when the dark-web AIs show up, doing their mischief, believe me, we'll have the white-hat AIs. That's gonna be the only countermeasure; we're gonna have to deploy white-hat countermeasures,

Jeff Dwoskin 42:37

right? It is crazy to think about, oh, wow, it's a lot. I'll be pondering this for a long time.

Vernon Reid 42:44

I'm fascinated by it. And my interactions with generative artwork and with AI have been really interesting. One of the things I've been able to do, when I'm working with generative art, is prompt graphics, or prompt typography. Because there is an issue with copyright, and there is an issue of plagiarism and copying; those are very real things. And one way to work with generative art, number one: our prompts are going to have to become a new form of literature, a new kind of prose writing. It's going to be an art unto itself. The other thing is to use our own images, our own photographs. Combining images of our own creation with our specific kind of prompts moves away from plagiarism and moves it into another realm. When you commission one of these, you say, okay, this is a still life that I photographed, use that as input, and then go wherever your prompt is going, because the prompt doesn't have to describe the flower at all. So that other thing, playing with the gap between the input image and the prompt, that in and of itself is going to give rise to a new kind of aesthetic,
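Vernon's workflow, your own photograph as the starting point plus a prompt that deliberately describes something else, has the shape of an image-to-image call. This sketch is hypothetical plumbing, not any real model's API: `generate` is a stand-in, and the "strength" knob, common in image-to-image tools, controls how far the output drifts from the input:

```python
# Toy sketch of the workflow Vernon describes: provenance comes from your own
# photo, direction comes from a prompt that needn't describe the photo at all.
# generate() is a stand-in for a real image-to-image model call.

def generate(init_image, prompt, strength=0.75):
    """Pretend image-to-image step: record the input, the prompt, and how far
    the result is allowed to depart from the input (0.0 = copy, 1.0 = ignore)."""
    return {
        "source": init_image,                 # your photograph, not scraped art
        "prompt": prompt,
        "drift_percent": int(strength * 100), # the 'gap' between image and prompt
    }

result = generate("my_still_life.jpg", "a city submerged in amber light", 0.6)
print(result)
```

The interesting aesthetic variable, as the conversation notes, is the gap: the same photo with strength 0.2 versus 0.9 yields work that leans on the image or on the prompt.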

Jeff Dwoskin 44:08

So much on the horizon. It's hard to take it all in, or even to think about where it's gonna go. Thanks for all these amazing observations.

Vernon Reid 44:19

One last thing, I guess. I think the area that's the least advanced is AI music. AI music is still lagging, though there are a few things that are really interesting. I don't think AI music is as advanced as AI in the visual arts right now. It's like little snippets of MIDI files. I'm gonna investigate it more, but there are certain things where there's a whole other level of pattern recognition and symbol recognition and translation that really isn't quite where it needs to be. There are some things like, okay, do a prompt for a piece of music, and the results I've heard are not, like, earth-shaking. But there are things in rapid development, certainly, in terms of harmonic accompaniment: come up with a line, and the AI suggests chords, voicings. That's pretty intuitive. But other than that,
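The "play a line, AI suggests chords" idea Vernon calls intuitive can be sketched with plain music theory, no model at all. This toy suggests every diatonic C-major triad that contains a given melody note; a real accompaniment system would rank suggestions with a trained model, and the function names here are invented:

```python
# Minimal harmonic-accompaniment sketch: which diatonic C-major triads
# harmonize a given melody note? Pure lookup, standing in for a model.

NOTES = ["C", "D", "E", "F", "G", "A", "B"]
QUALITIES = ["", "m", "m", "", "", "m", "dim"]  # triad qualities in a major key

def diatonic_triads():
    """Build the seven diatonic triads of C major as (name, notes) pairs."""
    return [
        (NOTES[i] + QUALITIES[i],
         [NOTES[i], NOTES[(i + 2) % 7], NOTES[(i + 4) % 7]])
        for i in range(7)
    ]

def suggest_chords(melody_note):
    """Suggest every diatonic triad containing the given note."""
    return [name for name, notes in diatonic_triads() if melody_note in notes]

print(suggest_chords("E"))  # triads that harmonize an E
```

An E in the melody, for instance, sits inside C, Em, and Am; a learned system would go further and choose among them from the context of the whole line.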

Jeff Dwoskin 45:21

It's gonna make me sound good singing. Look out, everybody!

Vernon Reid 45:28

You could be Johnny Cash. That's right, you could be Johnny Cash. You've got enough of Johnny Cash's phonemes to make it happen.

Jeff Dwoskin 45:35

Vernon, thank you so much for hanging out with me. I appreciate it. This was a fun chat.

Vernon Reid 45:40

I enjoyed it. Maybe we'll do it again sometime. I'd love to. We should do it same time next year, and we'll be like, holy shit, what happened?

Jeff Dwoskin 45:49

Oh, my God. Alright, we'll do a follow up.

Vernon Reid 45:54

Maybe it's incremental. Or maybe it's like, oh my god, what have we done? Right?

Jeff Dwoskin 45:58

I'll say. Well, thank you very much. All right, Vernon Reid, everybody! With the interview over, it can only mean one thing: another episode has come to an end. Episode 256 is now complete. Huge thanks again to my guest, my rock and roll guest, Vernon Reid. And of course, thanks to all of you for coming back week after week. It means the world to me, and I'll see you next time.

CTS Announcer 46:25

Thanks so much for listening to this episode of Classic conversations. If you liked what you heard, don't be shy and give us a follow on your favorite podcast app. But also, why not go ahead and tell all your friends about the show? You strike us as the kind of person that people listen to. Thanks in advance for spreading the word and we'll catch you next time on classic conversations.

Transcribed by https://otter.ai
