May 8, 2022 - Truth Unrestricted
26:55
Technology and Current Benevolent Overlords

Spencer and Jeff examine AI's explosive growth, like DeepMind's system learning chess in four hours and going undefeated in a 100-game match against a specialized engine. They praise life-saving tech, such as the bionic pancreas for type 1 diabetes, and cutting-edge biology tools, but warn of social media algorithms weaponizing tribalism via hyper-personalized feeds, monetizing attention with polarizing content like abortion or wars. Defining "benevolent overlords" as unelected tech leaders (e.g., Musk, Zuckerberg) whose ethics alone may prevent misuse, they question whether society's reliance on their conscience is sustainable without stronger legal or decentralized safeguards. [Automatically generated summary]

And we're back with podcast Truth Unrestricted, a podcast that would have a better name if they weren't all taken.
I'm Spencer.
I'm here today again with Jeff.
How are you doing, Jeff?
Evening, all.
So quick note, as always, we have an email address for anyone who wants to send any info about the podcast, anything you agree with, disagree with, want to talk about your garden and how it's coming in, whatever.
Truthunrestricted at gmail.com for all of that.
And today, we're going to talk about technology and something I call the current benevolent overlords.
That might require some definitions, but we'll get to it.
Technology, I'm not talking about, you know, how well your TV reception is.
Probably much better than it was 30 years ago.
I'm talking about computers, computer science, artificial intelligence.
Algorithms.
The magical algorithms that no one knows what they do, but they're out there doing things for us and to us.
This is a, it's a lot.
Most people don't know that much about artificial intelligence and the magnitude of this thing, the massive leap forward this has been.
But I'm going to give you just a taste of what this is like.
25 years ago, the greatest chess champion lost to the greatest computer that was built for the purpose of playing chess.
And ever since, no human playing alone has beaten one of these computers set to this task.
It's been unwinnable ever since.
25 years ago, almost to the day: May of 1997.
That was Garry Kasparov losing to Deep Blue.
Here's how it is now.
So there's a company called DeepMind.
They're not the only company that deals with this, but we program these computers to play games because we need some measure to compare them to humans and to each other.
And a game like chess, it's been around for an awfully long time and is played on every continent on the planet by many school children and adults.
It's one of these games that we use.
It's incredibly complex as it is.
It doesn't seem like it should be, but most of the math involved immediately outruns the parts of your brain that intuitively understand numbers.
But DeepMind is a company that's making some of these.
And if I told you that they built what's called a generally intelligent computer? Up until some years ago, there were computers made for specific purposes.
A computer made to play Go, a computer made to play chess, a computer made to play an Atari game.
And this is a generally intelligent computer, which means that it's meant to be told the rules of any game and learn how to kick ass at it.
So this thing had no memory of previous games played or a database of previous games that any masters had ever played or anything like that.
It wasn't given access to any of those things.
It just woke up, was told what the moves in chess were and what the goal was, and within four hours, it declared that it was ready to play chess.
And at that time, it played chess against what was then the world's greatest chess computer, a computer made specifically for the purpose of playing and winning at chess.
The type of computer that beat Kasparov.
Yes, yes.
This computer that was built by DeepMind, it was only awake and alive essentially for four hours.
It played 100 games against that other computer, the one built specifically for the purpose of playing chess.
Oh, wow.
It did not lose.
There were some draws, but it did not lose in 100 games.
Wow.
It didn't need any losses to learn from against that other computer.
It just knew enough to beat that computer every time.
And that face-off took place four and a half years ago.
The current iteration of that is much more powerful already than it was four and a half years ago.
So hopefully this illustrates in the mind of some listeners just how powerful this technology can really be.
Now, we're not really talking about chess computers today.
We're going to talk about computers that are set to the task of influencing us.
So these are similar computers.
They're machine learning computers, which means that they are capable of writing code which runs on their own machine and even can write code that would run on other machines that they might talk to.
And they deal with very large sets of data.
If you can even imagine the number of interactions that happen on Facebook within one month, they deal with that and more.
Like a single machine.
It's not conscious, but it is an entity all in itself.
It's looking for patterns across an enormous amount of data and it can handle it.
So the goal of these, as far as we know, there are some conspiracy hypotheses out there that will try to say maybe they have other goals, but ostensibly the goal of these machine learning computers is to put more eyeballs on the screens, to increase what they call engagement.
And in the parlance.
I would like to interject there to cite an example of where these machines are used for powers of good, because you said that you wanted to be called out on fallacy and hyperbole.
The statement you made implies that these new learning algorithmic AIs are existing for the purpose of the attention economy.
And that's not true.
It's one thing they have been tasked with doing because it's one thing that a particular group of individuals sees as an excellent use of them for profit.
However, another example would be my daughter, as you know, is a type 1 diabetic.
She has two pieces of technology that keep her alive.
One is a continuous glucose monitor that monitors her blood sugar at all times through a subdermal probe.
And another is an insulin pump that she manually calibrates and manually punches in both a basal drip to give her sort of a baseline of insulin.
And that formula is something that we're constantly having to adjust as she grows and as her dietary and activity habits change.
And then also she needs to manually tell it to give her bolus injections, which is like a big hit of insulin anytime she takes a meal and has a bunch of carbohydrates to process.
Anyways, not too long ago, it dawned on a computer coder in California.
He's like, well, if we had the pump talk to the CGM, we could remove the human input and we'd basically have a two-part bionic pancreas.
So he wrote the code to basically make a Bluetooth connection between the pump and the glucose monitor and then wrote an algorithm that interprets blood sugar data as it comes in, builds a baseline, tracks trends, and is constantly tweaking, adjusting, and writing new code.
So now diabetics, just in the U.S., this hasn't been approved in Canada yet, have access to this two-part piece of technology where this algorithm, this learning AI, is being used to learn how their body metabolizes sugar and stay one step ahead of that at all times by constantly making micro-adjustments.
So the CGM is giving the blood sugar information and the AI takes those inputs and adjusts the outputs of the basal drip of how much insulin is going out.
And all of the math that is right now clumsily being done by humans to calculate how much insulin she needs to metabolize a given amount of carbohydrates is now being done far more efficiently by a self-learning piece of artificial intelligence.
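The feedback loop Jeff describes, CGM readings in, insulin rate out, can be sketched as a toy controller. This is purely illustrative: every name and constant is invented for the example, and real closed-loop systems (such as the open-source OpenAPS project) use far richer insulin-on-board and carb-absorption models. Nothing like this should ever guide actual dosing.

```python
# Toy sketch of the CGM-to-pump loop described above. All names and
# constants are invented for illustration; real systems model insulin
# already on board, carb absorption, and safety limits.

TARGET_MG_DL = 110.0   # target blood glucose
SENSITIVITY = 50.0     # assumed mg/dL handled per extra unit/hour

def adjust_basal(current_basal: float, cgm_readings: list[float]) -> float:
    """Nudge the basal rate toward target based on level and recent trend."""
    latest = cgm_readings[-1]
    trend = cgm_readings[-1] - cgm_readings[0]   # rising or falling?
    error = (latest - TARGET_MG_DL) + trend      # weight where we're headed
    correction = error / SENSITIVITY
    # Clamp so a single noisy reading can't swing the dose wildly.
    correction = max(-0.5, min(0.5, correction))
    return max(0.0, current_basal + correction)
```

The real systems rerun this kind of adjustment every few minutes, but the core idea is the same: measure, compare to target, nudge the rate.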
Needed to tell that aside, this technology is excellent technology and it has a myriad of fantastic uses that can lift us up as a species.
Oh, yes.
However, we are human and we are fallible.
And so if some dick bag can figure out how to make a profit off of it, that is frequently the first place we'll go with any new technology that comes.
Yes.
This technology has been used to help understand much of the extremely complex biological and genetic research that has happened in the last 20 years: understanding how a protein is folded into a new form to engage with another cell, how all the cells and their genetics are functioning together in a larger entity.
And we used to think of genetics as just what you're born with.
And now we understand that there's other genetic changes that are happening as we develop, as we experience things in the world.
Our genes are slightly changing there.
And that's this technology is really helping us to understand all of those things.
It's really advancing us very far in all those fields, but it's also being used for all of our social media.
Facebook has a feed and it is choosing what to show you because as they put it, if it tried to show you everything, it would overwhelm you.
So it selects some things to show you.
And it has a way of trying to determine what it thinks you'll be interested in, which of the humans on your friends list you're more likely to want to hear from, or which posts or memes or whatever you'd like to see.
Well, and also like simple stuff like local news in your area.
Like there's very simple non-Machiavellian inputs that these bots use, such as like the GPS data on whatever device you log in with.
Like I noticed I had an uptick in my Facebook feed a couple months ago.
All of a sudden I was getting all kinds of stories about what was going on with something with the provincial politics in the province of Ontario.
And I don't follow Ontario provincial politics.
I live on the other end of the country.
I live in BC.
But I happened to be in Toronto for one week for a training conference for my work and was regularly on social media in my hotel room in the evening because there was nothing else to do.
So the algorithm was like, oh, we saw a lot of activity from this geographic area.
Maybe this guy wants to see local news from here.
But it's also just based on simple stuff like how long you linger when you're scrolling.
If nothing else, they'll just, pardon the phrase, throw a bunch of shit at the wall and see what sticks.
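The linger signal can drive a ranker with almost no machinery at all. A hypothetical sketch, not any platform's actual code; the field names and dwell numbers are invented:

```python
# Hypothetical dwell-time ranker: each candidate post is scored by how
# long this user has historically lingered on posts about the same topic.
def rank_feed(posts: list[dict], avg_dwell_seconds: dict[str, float]) -> list[dict]:
    """Order candidate posts so the longest-lingered topics come first."""
    return sorted(
        posts,
        key=lambda p: avg_dwell_seconds.get(p["topic"], 0.0),
        reverse=True,  # topics you've never lingered on score 0 and sink
    )
```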
Your phone is noticing when you look at it.
If your phone is advanced enough, it is.
And most phones now, smartphones, that is, are advanced enough that they recognize your face when you're facing the screen and they can tell how much time you're spending looking at certain things.
Some people are creeped out by that.
And I get that.
Some phones are more discreet about what they're passing on to the app and some are not.
Some apps won't let you use them unless you let them know a certain number of things about you.
I don't know how many people looked really closely at the end user license agreement for any of these large social media apps, Facebook, Twitter, Instagram, on and on down the line, TikTok, all of them, how much they want to know about you.
And they will say they want to know it so they know what to show you.
So we can show you better content and figure out who to sell you to.
And they are, to be fair, they are trying to show you better content, the things that they believe you're interested in.
And sometimes they're really getting it right.
I've noticed some people talk about they go to Spotify and Spotify is really good at picking the songs that they should like, even though they didn't play those songs themselves.
I was actually having a conversation with my daughter about that.
That's another example of the power being used for good or maybe more innocuous, less Machiavellian purposes anyways, is Spotify has this great ability.
You build a playlist for yourself or just scroll through your liked songs for a while.
If you listen to the same list over and over and over, eventually, I don't know if it's a setting you can toggle on or off.
I think you can turn it on.
My daughter showed me how and I've already forgotten.
But basically, you'll go through the same playlist four times in Spotify.
It'll be like, I understand what you like, but you have listened to the same songs here a whole bunch.
Here's this other song that my database tells me 9 million people who like these other songs that you like also like, but it's not on your list.
Do you like this song?
And you go, oh shit, yeah, I do like that song.
Okay, how about this song?
And like, I've had musical horizons opened for me by Spotify that radio never would have done.
And as far as exposure to new music goes for your, you know, average everyday person, those are your two choices, radio or Spotify.
Right.
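The "people who like the songs you like also like this one" logic Jeff describes is, at its core, co-occurrence counting. Here is a minimal sketch of just that core, not Spotify's actual system, with invented data:

```python
# Minimal item co-occurrence recommender. Scores every song the user
# hasn't liked by how often it shows up in the like-lists of users with
# overlapping taste; more shared likes means more weight.
from collections import Counter

def recommend(user_likes: set[str], all_users: list[set[str]], k: int = 1) -> list[str]:
    """Return the k unheard songs that co-occur most with the user's likes."""
    scores: Counter = Counter()
    for likes in all_users:
        overlap = len(likes & user_likes)
        if overlap == 0:
            continue  # no shared taste, nothing to learn here
        for song in likes - user_likes:
            scores[song] += overlap
    return [song for song, _ in scores.most_common(k)]
```

Real recommenders add audio features, listening context, and much larger models on top, but this counting step is the seed of the effect described above.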
Many of us are familiar with the concept that when you go into a grocery store and you go to buy hot dogs and hot dog buns, you'll get a different number of hot dogs in the package than the number of buns and they don't divide evenly.
It's not like you'll get six hot dogs and 12 buns and you can just buy two packages of hot dogs.
No, it's always 12 and eight.
So you're always buying too much of one or the other.
And this happens.
The cynical read is that this happens because grocery stores noticed that people tended to buy these two things together and that this way they can sell just that much more.
It doesn't seem like much, but if they sell 3% more of a thing across 600 stores, that's a lot.
That's a whole lot.
It's the kind of a lot that comes with almost no extra overhead because they don't have to stock any extra shelves, really.
They don't have to hire an extra person to keep track of an extra product or anything like that.
It's basically free extra.
This is the kind of data that gets put together and packaged across these social media networks.
They notice that X percent of people who were interested in political leader X were also interested in interesting thing Y.
And so whenever they get a new person who's interested in leader of political party X, they try to say, oh, maybe this person also likes interesting thing Y and they bring that together.
And that seems really innocuous, but it is a thing that increases a thing we've talked a lot about lately called tribalism.
100%.
Because say the algorithm groups together the people that like these four things.
Another person comes by and likes two of them.
They tend to get flooded with more of the other two.
Shoehorned toward the other two things.
Yeah, they're all grouped together.
And that's a thing that the algorithm will absolutely notice and has.
And well, I don't think it's healthy, but to me, it's also a fairly minor thing.
One of the things that this can do, and this is a thing I call trying to sell 2% more shoes, is the idea that all they're trying to do is just put the ads in the right spot.
You don't need to advertise Nikes to everyone in the world.
What you really want to do is advertise Nikes to the people who are likely to buy shoes in the next two weeks.
And this is what targeted advertising really is.
You know, if you mention near your phone that you've been thinking about getting another dog and you really like miniature dachshunds, and suddenly there's an ad on your Facebook feed about a miniature dachshund breeder near you.
Like, yeah, 100%, man.
That's all part of that.
I'll badly botch or paraphrase a statement I heard not too long ago that was, there's no such thing as free social media.
Everything has an admission charge.
And if you're not paying any money, then you are the admission charge.
Every social media page, like you said, has an end user license agreement.
It's one of the reasons why I don't have Facebook on my phone because I read the end user license agreement for Facebook and looked at all of the permissions that it wanted.
Like it wanted basically full data access to my phone at all times.
And I was not comfortable with that level of invasion into my privacy.
So I just, I've got it on my laptop computer and that's it.
But it doesn't mean they're not still getting the info and constantly refining my feed because of it.
And like the point that you made about the somewhat innocuous goal of increasing engagement, I don't really think that's innocuous at all.
Like part of it is just banal advertising motivations of just we need eyes on screens so that they see our ads and they buy things.
That's the most simplistic profit motivated purpose of it.
And it was obviously at the forefront of Zuckerberg's goal when he created Facebook and others when they did the same.
But one thing it's really good at, and you sort of touched on it, was not just passively reading us and feeding us things we might like or that they think might hold our attention, but also it can be used to influence us.
I see it in my feed all the time.
I get bombarded with stuff, like the point you made about like, you've got a social group of people who are into these six things and new person joins the group and is only into two of those things.
He's going to get absolutely bombarded with those other four things until he is completely in lockstep with the rest of that group, right?
Like I'm, you know me, I'm pretty lefty.
My feed just gets freaking bombarded some days with these bubbles of social media activity, whatever the topic, whatever the wedge issue topic of the day is.
But most of what I see are short, easily digestible memes that are frequently straw man, fallacious, or false equivalency arguments on whatever wedge issue topic we're talking about today, you know, whether it's the environment or abortion or the latest war or whatever.
Bottom line, issues like abortion, for example, are a very polarizing political issue.
A political party that comes out with a standpoint, either one side or the other on abortion is going to attract a large swath of voters because people generally believe so strongly one way or another about that particular issue that a party stance on that issue alone could sway their vote.
So politicians love wedge issues because it makes campaigning really simplistic.
Because if we can just preach about things where we know we have a huge base of voters that are all going to vote for us, as long as we, to tie back to a past episode, signal our virtues on this particular topic, then we know we're going to be able to maintain power, win that riding, or whatever their goal is.
So a lot of these algorithms, they feed those wedge issues repeatedly to just try and increase polarization and increase tribalism and get us all crammed into these echo chambers with a bunch of other people who think exactly like we do.
So that when time comes to vote, we all vote the same.
I think it's time to define this term that I used in the title of the podcast.
What are current benevolent overlords?
Well, you're talking like oligarchs, right?
But friendly oligarchs.
Well, The Simpsons, which I think is even still running now, has a long-running joke where whenever an alien species is coming to Earth or whatever's happening, Kent Brockman comes on and he says, I, for one, welcome our new alien overlords, or our new robot overlords.
Yeah.
So for me, this is a callback to The Simpsons and Kent Brockman.
But really, we are relying on the people who are running these technologies to be benevolent, to use this technology responsibly.
And the only reason they would have to do so is their own conscience and whatever trust they need to maintain with the people around them to keep a good relationship with them.
Because right now, it takes a team of people to change the course of this technology, to change the goal and to give it additional goals alongside everything else.
One guy can't come in and just slip something in and just say, oh, yeah, also do this while you're doing all those other things and not have anyone notice.
But that team is getting smaller as the technology develops.
It takes fewer and fewer people each year to change or tweak the goal of this technology.
Eventually, it will be just one guy that can walk in.
It'll be, I pick on him every week, but Jeff Bezos, who goes in and says to Alexa, from now on, you're going to do these things.
You know, you're going to listen in, and whenever you identify a person that is opposing me on something, you're going to get some incriminating dirt on them and record it all.
And the only thing stopping Jeff Bezos from doing that now is his own conscience and the idea that if he did that, he might have a whistleblower in his organization because someone might not like that he did it and decide that everyone needs to know.
And the fact that technically in most developed Western nations, there's laws against a lot of what you just listed.
At what point would they ever catch him?
Well, no, but like that's well.
You signed a user license agreement.
I let the thing listen to you in your living room.
Well, while I don't disagree with you that breaking the laws would be easy to accomplish, and may be happening to some degree already,
you can't disagree with the fact that there are laws against a lot of these invasions of privacy that come from this social media animal we're dealing with, and those laws disincentivize it.
Like he's not just a slave to his own conscience or a whistleblower.
The mere fact that one of the potential consequences is a whistleblower is because that whistleblower is going to blow the whistle about a law that got broken.
Yeah, that's exactly right.
So the law and government oversight are a very, very, very important factor in shaping this technology.
Unfortunately, a lot of less than ethical governments and politicians have recognized that this technology can be very useful for them.
So that muddies the waters even further.
Well, yes.
And you get into the inevitable problem of a conspiracy and that you have to rely on the other people involved in the conspiracy to maintain the same level of secrecy about it as you have.
No, no, not necessarily.
I personally, and I think we touched on it in the conspiracy episode, I take, I don't think umbrage is the right word, I take exception to the idea of discounting a conspiracy because the level of cooperation required from the number of disparate bodies is too high and complex to be practical.
Conspiracies of this sort, in my opinion, maybe we just can't call them conspiracies.
It's more like politics.
Politics is nothing other than a series of interlocking, mutually beneficial relationships.
We don't need the majority of the U.S. Senate and the president to personally get into bed with Elon Musk and Jeff Bezos on issues of social media.
But if the purveyors of social media produce a product that benefits a governor and his focus groups tell him, oh, yeah, you know, when we did this campaign on this social media platform, we were able to zero in on a bunch of voters and preach this red meat to them and it worked out at the polls.
You won the seat.
That guy who just went to office when he faces legislation about restricting social media use and content is probably going to be disincentivized to do anything about that because he's directly benefited from it.
Interlocking, mutually beneficial relationship.
Not premeditated, not like men in back rooms smoking cigars.
Yes, I agree.
Absolutely.
And I would never say that it's impossible to have a conspiracy because of that.
It's just more problematic.
But one of the problems is that with an ever-decreasing size of the team that's needed to do this, if and when you eventually get to a point where it's only one person that can pull that switch, there isn't a conspiracy anymore.
It's only the conscience of the person who has their hand on the switch.
Like say a sole owner and proprietor of one of the largest social media platforms in the world.
The one on everyone's mind right now being...
What's the timestamp on the podcast so far?
We're over 45 minutes.
We did it.
We made it.
We made it the entire length of an episode without naming Elon Musk.
Yeah.
I mean, we did now.
Yeah.
Well, we got over the 45 minute mark, so it's okay now.
But that obviously is what sort of moved this episode to the forefront because you had something else planned for tonight, right?
Yeah.
But yeah, it's huge news, man.
And I see your point.
Absolutely.
It was bad enough when Twitter just answered to a board of money-hungry capitalists who wanted as much profit as possible.
Now we have like a legit dragon at the helm with only one conscience and one agenda to be concerned about.
And I, for one, find it very concerning.
Well, as I pointed out, it still would take a team of people to change the goals of the AI that's involved.
Some media outlets have tried to point out maybe, maybe he tweaks it.
Maybe only the political interests that he's interested in make it through to eyeballs and some of the others are turned down on their volume a little bit.
It's hard to say.
If he wants to do that now, he would have to instruct other people to do that for him.
And he'd have to rely on those people to not let anyone else know that they did that.
And that would essentially involve some level of conspiracy.
That is a conspiracy by the definition we've established here.
That's not to say it couldn't happen. But in every case, with our current paradigm of ownership, even if we like Elon Musk and we think he's a good guy and he's only doing this, as he says, to have absolutely free speech, even if we take him exactly at his word and he's got a pure heart and he just spent this money out of the goodness of his heart to make sure that the world stayed democratic, someday he's not going to be here and someone else is going to own that.
And what's going to be in that person's head?
What's that conscience going to be like?
It's not going to be a clone of Elon Musk.
Someone else is going to own it.
And Mark Zuckerberg, it doesn't matter what you think of him.
Maybe he's a great guy and he's doing his best to make sure, keep the world on track because he's got one hand on the tiller, but he won't be here forever.
Someone else is going to own Facebook one day and it won't be Mark Zuckerberg.
And what's that person going to want?
Are they going to be the J. Jonah Jameson of this thing and just decide that Spider-Man has to go?
It doesn't matter what good he's done for the world.
And that's where we are.
We have really, and I think it is benevolent overlords.
Increasingly, they have more overlord power every year.
And currently, they are fairly benevolent compared to what they could do if they were real James Bond villains.
They could do a lot worse things to us if they were that, but they're not going to be there forever.
And someone's going to replace them.
And what's that person going to be like?
Is that person going to be a James Bond villain?
That's right.
And that's the real question is how long are we going to allow our world to be run by benevolent overlords and just hope for the best?
Yeah.
Well, I think on that note, we're probably done for the night.
Another happy note to end the podcast.
Yeah, I guess so.
All right.