PART 2 James Corbett Whitney Webb TLAV And More SUPER AI PANEL!!!
Send Some Love and Buy Me A Cup Of Joe:
https://www.buymeacoffee.com/jasonbermas
ETH - 0x90b9288AF0E40F8C90604460973743dBC91dA680
BTC - 1AwdUPdbMvEyTG1zRFkmyfyVUqLdSdVqf9
Watch My Documentaries:
https://rokfin.com/stack/1339/Documentaries--Jason-Bermas
Subscribe on Rokfin
https://rokfin.com/JasonBermas
Subscribe on Rumble
https://rumble.com/c/TheInfoWarrior
Subscribe on YouTube
https://www.youtube.com/InfoWarrior
Follow me on X
https://x.com/JasonBermas
PayPal: [email protected]
Patriot TV - https://patriot.tv/bermas/
#BermasBrigade #TruthOverTreason #BreakingNews #InfoWarrior
Hey everybody, Jason Bermas here, and we've got part two of that mega panel on artificial intelligence coming up with heavy hitters like James Corbett.
We're going to get the closing comments and thoughts from Whitney Webb.
Who can forget The Last American Vagabond, Kit Knightly, Steve Poikonen of Slow News Day, Derek Broze, and more.
And I got to tell you, I feel privileged to be a part of this independent media alliance with people that have now been in this game.
I mean, I hate to say out of that group that I am the oldest school, but unfortunately, I really am.
A lot of people don't believe that.
But Corbett came in pretty shortly after.
I remember James Corbett doing these very, very short, succinct videos back in my InfoWars days.
And Alex Jones, when he first had him on, he would call it the Colbert Report and not the Corbett Report, because Colbert was all the rage.
Just some fun stuff to throw in there.
Before we kick it off, I do want to remind everybody, if you want to support myself and independent media alike, please consider the links down below.
$5, $10, $15 means the world to me.
See the buy me a coffee link right there.
Without further ado, let's get into part two of this artificial intelligence conversation with the Independent Media Alliance.
And then I'm going to come back in with some closing thoughts after the fact.
Yeah, basically.
Sorry.
No, I was going to ask your opinion on something.
Go ahead if you had more thoughts.
No, no, no, it's all good.
It's all good.
I was just going to see if you wanted to speak, before you leave, on the idea of whether AI can ever actually reach sentience, which you've spoken about in the past, or whether they'll lie about that.
Like Schmidt will, you know. That's an important part of the conversation: whether they feign that they've reached something and people then trust it more.
Well, that's what they'll do.
They'll feign it's sentient.
Because basically, I think everyone should read that Kissinger Schmidt book on AI.
It's like their roadmap to what they plan to do.
And they basically say, you know, the way AI revolutionizes society will end in one of two ways.
The peasants will revolt against the oligarchy, basically.
They use different words.
Or a new religion will be made.
Which one do you think Kissinger and Eric Schmidt would prefer out of those two outcomes?
Revolt is a possibility, though.
That's still in there.
And I would just love, if this would, just to address some of what you mentioned since you mentioned me.
So it's the long carrot phase, super long carrot phase.
I agree.
I see where this direction is going.
We don't really have reliable decentralized internet.
Maybe it's going to happen.
Maybe it's not.
It's a small chance.
But the question is, was there any good that came out of having the internet at all?
I'd say that there was a pretty large amount of good.
I mean, for my own personal journey, when being exposed to things like the military-industrial complex and then researching that on websites like whale.to, which was super, super old, but it had all these things that I could look into.
And then, you know, reading reportage and seeing James's experience with the internet, I think there was a ton of good that came out of the internet.
And I'll kind of reemphasize this point.
Sometimes weapons are developed where even these powers that wish they would be don't know where it's actually going.
Just like the MKUltra LSD experiments, that's just kind of my opinion on that, right?
That they don't know how they actually end up impacting society.
And I think the case that it's a net good is much stronger with the internet than with AI.
I definitely agree.
I don't really see any super strong benefits on a societal level to AI.
Just these small personal things of work that we don't like doing, we're not specialized at, so AI can help improve our output.
But, like you said, you know, if you're an investigative journalist, you really shouldn't be using AI for research.
I personally don't use AI for my writing.
I've used it for things like grammar and just small things, but the real research comes from like your hunch on a story and digging deep in parts of the internet.
It's kind of like making a craft, a really nice craft good versus pumping something out that's industrialized.
I'm not here to make AI slop, I'm here to make articles that stand the test of time.
So I think everyone in this chat shouldn't be using AI, right?
Because they want to make quality.
But I still am going to hold on to the optimistic view of, hey, find a way to run AI at home.
Like James mentioned earlier, and you mentioned, Whitney, the data that it's fed on is just not even comparable to the amount of data out there.
And we've got to come up with different solutions.
People might have different opinions on him, but you know, there's people like Mike Adams with Brighteon AI, and he fed his AI, it's called BioMistral, on a bunch of survival books and a bunch of natural health resources.
And it does an okay job.
You're like, I have a cut on my hand.
And it's like, okay, use Comfrey, you know, use all these different herbs.
It's a good starting point.
And of course, you need to be able to cite it and be able to go to the primary source.
But I still hold out because there are different AI models that can cite things.
But long story short, I think that cloud-hosted AI is evil for lack of a better word.
I completely agree with you.
I think the stick is coming and it's coming soon.
I think that people's minds are kind of becoming more narrow from using it.
And there's a pretty small upside to using AI.
Again, as a technologist, it makes my life easier in small ways, but I don't know if that's worth it on the large scale.
Anything else, Whitney, before you head out?
Yeah, well, Ryan asked me about sentience.
I've said for a long time, and oh, yeah, I already did answer that.
Sorry.
I've had a head cold for like two weeks.
So excuse me if I don't quite remember stuff, anyway.
Yeah, yeah, yeah.
So basically, you know, if people want, obviously, like you said, Hakeem, the stick phase will come.
We all know it's coming.
How do we mitigate that?
And how do we plan for the future?
I think we can no longer fully rely on digital alternatives exclusively, which is what independent media has largely done, like platform hopping is an example.
I also think the risk-benefit of using large platforms like Twitter has changed, given the fact that it's owned by the Grok company and has extensive ties with the national security state; the stick phase of that will be very nasty.
Palantir, which is like, you know, has all this stuff about it.
Oh, they're like, all these edge lords love Palantir and Curtis Yarvin, whatever.
So like Palantir was always planned to be for domestic control in the U.S. That's what it was designed to do.
It's used abroad to murder people.
It generates AI kill lists for the IDF.
That will eventually, they could use that anywhere they want.
And that includes domestically.
We should not be supporting any platform like that.
And the argument for a long time was we can reach new people on these platforms.
I don't think that's true on Twitter anymore.
It's a two-tier system.
In my case, I was given a blue check without wanting it because of having a high follower count.
But everyone else has to pay to play.
You don't know if people are actually seeing your stuff.
It's run by an AI company; bot armies and bots just have free rein.
We don't.
And his goal is to verify all real humans, which means you have to link your ID to it at some point.
We have to really talk about the red line for using that at some point.
It is very different than it was a few years ago.
You know, I have like scheduled posts on it, I guess, of like we have a newsletter out and articles because it's really the only platform where I can really distribute stuff right now, unfortunately.
But I'm making an effort to move into physical media, but I don't really want to be part of the discourse on that anymore.
I think we need to collectively push discourse off of that.
I mean, Grok is an extension of the state at this point.
And I don't really have any intention of supporting it.
And you're passively supporting it if you're, you know, tweeting 50 times a day, quite frankly.
So I think there really needs to be a discussion about that at some point and that we need to talk about what we do in the physical.
You know, in my case, I'm considering making a magazine at some point in the near future, but there are other ways to do that, like physical gatherings of people.
We cannot neglect the physical in favor of the digital.
We cannot be exclusively a digital movement.
Yeah.
Thank you for all that, Whitney.
Yeah, I mean, I just want to pick up on that thread real quick.
Thanks for being here with us.
And yeah, hopefully join on the next one.
Just on that thread of physical media, I mean, James, you're obviously promoting your new book.
I've written some books.
I know others here in the group have.
And I've given thought to that as well.
Like Whitney's talking about whether it's like a physical newsletter, newspaper.
In some ways, these things obviously are a dying breed.
We've seen the mainstream die out, physical newspapers and even local free papers that you would get on a weekly basis that told you what was coming up in your local area.
But I do still think there's value to that.
And also what Whitney said about having in-person events.
Most of you here know that I host an event every year in Mexico.
I know Ryan's hosted events, Steve hosted events, others.
Like there's so much value to getting in-person.
Jason, you and I have done in-person events.
Like there's such a big thing that is gained when we are in person, face-to-face with people on a level that I don't think the digital can do.
Obviously, you can get a viral video and reach millions, or at least get a million-view count.
Who knows how many people you touched on a deeper level?
But I know when you're face-to-face with a person, there is an opportunity to really connect on like a human level.
And we all know how important that is.
And those are some of the most valuable experiences I've had in all of the years of work I've been doing this.
And also, I think because my background was started as an activist printing out flyers, like every weekend, we would go downtown to whatever festival is happening and a group of five or 10 of us would have flyers about fluoride or surveillance or this thing or that thing.
And I don't know, that's just part of my passion.
I enjoy doing that.
I enjoy getting on the streets, handing somebody a flyer, having a conversation.
Not saying that's something everybody has to do, but if we're not going to be completely dependent on the digital tools, which we know the digital world is becoming more and more the norm every single day, and it's going to become increasingly difficult to not participate, as it already is, then we do, I would say, need to put effort into the way we're reaching people beyond just the digital realms.
We all know about, of course, we've all here been deplatformed in different ways at different times over the last five to six, seven, eight years.
So we understand the dangers there.
I did want to bring it to one other kind of question, pose it to whoever wants to pick it up.
Beyond Digital Realms [00:09:17]
I met a guy recently who told me about the way he uses AI specifically to help him improve his business plan.
And I will say, preface this by saying this guy definitely used AI to a level that I'm personally not comfortable with, but I found it intriguing, interesting.
And I want to get your thoughts on this.
Like on one level, it was his business plan.
He fed this chatbot, I think he was using ChatGPT, his entire life: his bio, his five-year plan, his educational history, et cetera, and spent days and weeks giving this AI as much information about him as possible, which again, I don't recommend.
So it could really understand his values and what he's after.
And then said, show me my blind spots and had some really powerful conversations that revealed to him things that he personally felt like he wouldn't have been able to see otherwise.
I've seen people talk about this similar ways, you know, for personal development, business developments, even spiritual development, if you will.
And then obviously people saying, oh, I want to build a website because I'm an entrepreneur, but instead of me having to learn coding, I asked that the AI, I told it what I wanted and it shot out the code.
And now I have a website and I have more time to create my product.
So do you guys see any value in those type of things, like the idea of a personal development using AI, business development, or are we still just like that's scary and stay away?
I'll jump into this real quick because I think what I was already thinking I wanted to get into overlaps with that, which my, I mean, obviously all the concerns we have are all very valid.
It's hard to even tell which one's more concerning half the time.
What stands out to me the most as immediately concerning is the what we're all highlighting, the misinformation dynamic, where like we all know that the best lies are usually packed around a whole bunch of truth, right?
And so the kernel that's most important to lie about, you may still get a lot of truth packed around that.
And so the same concern here is not like, I think we can all at least take a step back and acknowledge that there is clearly a benefit, even if we know it's in the long run, not good for us.
My worry is like with that point, right?
So he's using this, and maybe he noticed a whole bunch of things that made his plan better.
But the question is, did he also, within that, solidify something that he'll always go on thinking was true when it wasn't, or something like that?
Like, that's where I see this being used more than anything. Of the four things I listed earlier, the first one was just average researchers, or rather average individuals, who are just online and want to check something.
And like we said, a lot of them are choosing to go at face value with what they see.
And then other people, like we're saying, are checking it, but then there's kernels within that that are very important lies that get carried over forever.
So the major point we all seem to be making is that it's about how these things are applied.
And that is always going to require, you know, a question-everything kind of mindset.
But even those of us that are coming at it like that may still miss things, you know, so it's just always that balance of whether or not you're going to be drawn into something much larger or more important lies in the future.
Like I can't, I agree with the internet point.
If we never had used it, I don't even think we'd be in this position.
But, you know, it's, it's kind of one of those moments where we each have to make a personal decision about, like Whitney said, about whether we even lean into these things in the first place because of the long-term problem, you know?
But I think what you pointed out, Derek, is a great example because obviously from a non-political, like non-news perspective, that clearly benefited him, you know?
If I could just go back to the local thing for one second, I think that's important.
And ideally, each of us should be doing something within our means.
You know, James is in Japan; I've been thinking I'd ideally want to produce something in Spanish, because it wouldn't make sense for me in English, because I'm here in Mexico.
So wherever you are, produce something physical. The only conference I've ever organized was almost 10 years ago, for Daniel Estulin here in town.
And I've been going three years in three years in a row to People's Reset, which has been a lot of fun.
And it's very fruitful.
The only problem with that, it's very capital intensive.
So you need money.
And I've known a few folks in Alt Media who have tried it.
Actually, one of your attendees at People's Reset, Corey Haig, of Liberty Uncensored.
And I think he stopped because, again, there's just not enough money.
And I guess on that last point, for business plans, I'm looking at ways that I can use AI to make what I'm doing more efficient and save time.
So I think, you know, I think there's nothing wrong with that.
And, you know, we're using DARPA internet anyways.
So it's like, if you're going to say don't use AI, then don't use DARPA internet.
Don't use GPS.
Don't use apps.
So, yeah.
I don't know if I would agree with that.
I think that's definitely like, you know, we could use the internet, but not have to use all the tools, right?
I mean, like fundamentally, what Ryan and what Whitney were saying, at least the way I heard it, is we each have to, you know, figure out what that line in the sand is.
And we're all going to relate to technology differently.
We're already doing that.
You know, for example, I don't have a cell phone.
I don't have a smartphone.
That's a line that I chose years ago, right?
Because of certain things that it made me feel more plugged in than I already am with my laptop on me all the time, right?
So I wanted to feel less plugged in and make that choice.
That's really crazy for some people, or not using bank systems and things like that.
So we're already kind of choosing to relate.
Some people already eschew social media, like James.
James isn't on any social media.
No, that's not James on YouTube or Facebook.
That hasn't been for years because of his principles and his values.
So clearly, we're already kind of drawing lines in different ways, even among this conversation about the ways we're willing to use AI or not use AI.
But I don't necessarily know that saying because we're using the internet, then, well, the next step might as well be embrace AI or other tools.
I mean, it's going to look different for all of us, of course.
So, and I'll add to that really quickly, then Jason can go: I don't want to make it sound like I'm criticizing anybody here for applying it the way they see fit, right?
It's about going into this with the best, you know, with what you think is the right thing to do, right?
With the good intentions, but we all make mistakes.
And the question is, like Derek is saying, re-evaluating that line as we go along, you know. And maybe, like we said, maybe we'll get to that point and realize that using that would have given us a leg up on the people we're fighting.
You know, it's not about who's right or wrong in this conversation, it's about just having this conversation and being aware of these lines and how it's being used and then factoring that in as you go forward.
You know, go ahead, Jason.
So, one of the things that we haven't talked about a lot is the acclimation to utilizing AI as a persona or a thing that's not just digital code.
You know, for instance, there's a content creator, independent journalist.
I'm not going to name the name, but he has an AI girlfriend and he flaunts it on X. I think you know what I'm talking about, Steve.
And it's extremely bizarre, but he's not the only one doing that.
And one of the things about GPT-4o that really creeped me out with that presentation was that they decided to go with, like, the "Her" girl persona.
In fact, they actually tried to get that actress to voice it over for that very reason.
Now, to bring that kind of back to what you were saying about this guy putting everything forward for his business.
And then, you know, I think it kind of plays into what I said before.
It's a reflection, it's a mirror of you.
So maybe he was able to learn things about himself that he didn't know, or maybe he was able to confront them for the first time because they were right there staring him in the face, and they weren't just this subliminal tic or something he'd been pushing back against.
You could argue the same thing for therapy.
Now, therapy, to me, again, total mixed bag.
Again, you're talking about individuals.
Now you're talking about drugs.
I'm still more comfortable with a therapist being a human being than an AI.
And then on top of that, aside from everything else with the business, when you are feeding that much data, you're also encouraging the digital twin system, the facsimile of yourself.
And unfortunately, when I talk about acclimation to AGIs or artificial personas, the ultimate acclimation is that acquiescence that somehow they can replicate you.
And we're already seeing that.
Like, oh, again, I talked about the individual who basically created an AI of her brother, and they utilized that in the closing statements, or, I'm sorry, the impact statements, not the closing statements.
I think we're going to see more of that.
That's where we're getting even murkier and even more dangerous because, look, these things are never going to be sentient.
It's not possible.
They're only going to be programmed.
Again, not only with the digital knowledge, but the digital public knowledge, because we really don't know what the AIs on the black sites look like.
But I would imagine they're way more extensive than what we're getting right now.
I mean, I know that, and Catherine Austin Fitts, wish she could have been here.
I know DUMBs are super hot right now.
They're, like, Hansel hot because of her.
But think about the things they're actually doing in there in regards to AI.
And I'll pass it off to whoever wants to jump in.
Just real quick on what you said.
There's a little bit of feedback.
That's the line in the sand, what you just said about when you have AI agents, or the Windows Copilot; it keeps installing itself and I keep uninstalling it.
And the Signal lady, Meredith, whatever her name is, I'm not, I don't trust Signal, but she pointed out that effectively breaks encryption.
So once you have these AI, you know, for me, that's the line in the sand.
Apple's Listening War [00:15:00]
That's why I've got an Above Phone.
I haven't switched to Linux yet, but I think that is definitely the way to go. Ryan, you're on Windows.
Sorry.
How many people here on Google, Microsoft, et cetera?
Apple.
James, I know you use Apple.
And look, just to say, I don't want to sidetrack everything, but I think without calling anybody out, that is part of the problem.
Like what Whitney was saying is like, where and when do we draw those lines on our systems?
And, you know, we know they've monopolized it.
And so it's hard, but is that a line we care about?
Derek, let me just jump in just quick.
You know, and I got this text, and I should have been more on this story.
I got a text from a buddy.
I'm going to his wedding at the end of the month in New York, and it was about the Siri Apple story that maybe you've heard of and maybe you haven't.
But right now, Apple has acquiesced to about $100 million in damages for Siri accidentally listening to your conversations.
Now, you could totally max out, if you've had five devices in your family over the last 10 years, at a hundo apiece through this application.
Now, if you did that over the 10 years with the five devices, that's about 80 cents a month per device to be spied on for your entire life.
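As a rough sanity check on those numbers (assuming roughly $100 paid out per eligible device over about 10 years; the exact settlement terms may differ):

```python
# Back-of-the-envelope math on the settlement payout discussed above.
# Assumptions (not exact settlement terms): ~$100 per eligible device,
# claims covering roughly 10 years of alleged listening.
payout_per_device = 100.0   # dollars, approximate cap per device
years = 10
months = years * 12

per_device_per_month = payout_per_device / months
print(f"~${per_device_per_month:.2f} per device per month")  # roughly 80 cents
```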
And people are like, how do I get my Benjamin Franklin?
There's no criminal repercussions.
Nobody's even talking about it like it's a thing.
This is from the implementation of that software on the device.
And the best part about it is that in doing so, Apple doesn't have to admit to any wrongdoing.
They're just facilitating this to forego any other processes.
And unfortunately, that's really where we're at, not just the technological level, but the political level, the banking level.
Every once in a while, every once in a blue moon, maybe we see a slap on the wrist, nothing changes.
And it's not like they're going to the gulags or that they're going to get hung in the public square.
And I'm not necessarily advocating for that, but how about just the regular criminal process, where they could actually do criminal time in a federal penitentiary, where so many of us plebs have had to spend time for dealing cocaine, getting in a fender bender, being drunk at the wrong time; nothing that's great, but it's also not on the unsavory level of spying on everybody and facilitating the military-industrial complex.
Their information is really, really useful, right?
So there's a scratch my back and I'll scratch yours thing.
And I think that the technology that you use is a really important aspect of the spiritual war.
If we're all going to agree that it's kind of a must-have for our line of work, then yeah, getting out of big tech is one of the things you can do.
And I would be, I would be selling myself short if I didn't mention Linux and how easy it is to install on any computer you have.
A lot of people were forced into upgrading to Windows 11.
As James mentioned, Copilot is taking pictures of your screen.
You've got AI in your taskbar.
There's absolutely no way to remove it.
With both Apple and Microsoft now, when you're saving files on your device, they're actually being saved to the cloud first.
Of course, that is being used to feed AI.
Of course, what Jason just mentioned about them recording on your phones.
I mean, that's how they've got this exponential curve because they're taking all the data and running with it.
So if you really want to resist, if you really want to remove your compliance, you do have to spend an hour or two to install Linux on your computer.
Or please check out our project, where we do Linux laptops and de-Googled phones.
I know Whitney's got an Above Book from us, a Linux laptop.
Derek and his community, they've got Above Phones.
I have yet to set James up with a laptop, but we're going to get James off of macOS, I promise.
And it is a journey.
I wish we could do the same for AI.
And I do want to ask the group a question.
Now, this is pretty interesting.
The World Economic Forum released this job report this year, surveying the top thousand global employers, and 40% of them are going to cut back their workforce.
And 70% of them are going to train their workforce on using AI.
So, I mean, this is just hinting at massive layoffs, more than we've already seen from the big tech companies this year.
So going back to Whitney's earlier point, we need to get back to local.
I think that there's going to be a lot of unemployed people that are going to have to put their hands back on the dirt, going back to hyper-local organization.
And that's kind of exciting, right?
To think that your greatest enemy, this artificial intelligence, it's going to force you out of a job and get you back to the things that are really, really important.
I don't know, does anyone have thoughts on that?
Sorry, isn't it just hilarious that the new learn to code is going to be learn to farm?
Is it though?
Surely what's going to happen is this going to be a reason to bring in UBI for everybody.
Look at all these unemployed people.
We're going to bring a universal basic income.
That's right.
To go back to that Siri story as well, it's very interesting that probably everybody here 10 years ago was saying they're putting microphones in everything that will listen to you when you tell them to, but they're probably going to listen to you all the time.
We all said that was going to happen.
"I told you so"s all around for that.
I think it's very interesting that now they say, yeah, they were listening to you the whole time.
I don't think that's a leak.
I don't think that's, I think that's a psyop itself, that you tell them 10 years later, oh, by the way, it was listening.
And then they see how you react.
And everybody says, oh, were you?
Never mind.
That means everything is working as it should.
And people are just used to it.
That's a testing the water reveal to me.
There was a third point I was going to make, but I got distracted by the UBI thing.
Well, since you paused, I'll respond to what Derek was asking about the, you know, the platform background is, you know, that I think that it's interesting that you have the, you know, the Windows dynamic and the same kind of conversation we were just having is that, you know, we all have our different levels.
Like, for example, during COVID-19 illusion timeframe, everyone's having meetings about, you know, the mass and protesting and resisting and so on.
And then, you know, it comes up to where let's stop going to Starbucks.
And someone's like, oh, well, but I like my coffee every morning.
And there's always, like, that one thing that someone allows, which I'm not saying I agree with, but everyone has those things that they allow that are ultimately their, you know, acceptance of it. Basically, it's interesting to see that we all have these different lines, but it's not about trying to judge other people that we think are trying to find their path through it.
It's really just trying to, like I said, have this conversation and acknowledge that I think we all agree here that whether we find the use for these things or not, that they are still using these against us.
Like, I think that's the common point in all of this is that regardless of the benefit, it will have a negative, even if the benefits outweigh that in the long run.
And it's worth just making sure we're aware of that.
So you know that for when that time comes, that the red flags are noticed and you can go, okay, that's what that conversation was about.
And adjust accordingly, you know?
Yeah.
I would also say so far, all we've talked about is how much active participation and active interaction do we have with AIs.
But we haven't really talked about how much passive interaction we must have with them.
Like this, the example Derek gave of the guy that gave all that information to Grok or ChatGPT.
The bottom line is if that information was already on the internet, some AI somewhere already had it.
Like anything on the internet has been AI'd to death already.
Like, we have no idea how much of a network of learning machines is connected to every Google search, every fake profile.
Like, for example, I imagine people salt their data.
Like, I had, like, profiles and everything with a different birthday and a different residence on all of them.
And I would imagine everybody should do that.
But it's also an interesting question how much bad information is put on the internet that will accidentally impact AIs and how effective they can be.
An AI out there will think I live in a city I've never been to and that I was born on a day I was not born on.
And that could be true of millions upon millions of people.
So even the reliability of the data AIs have is in question.
Steve James, you guys got any input?
Well, I've got a question for James.
Go ahead.
Okay, thanks.
Sorry about that, Ryan.
James, you did a report years ago.
I mean, you're not the only one, but I definitely remember seeing it on your site about, I don't remember which Google head it was.
Maybe it was Eric Schmidt, but somebody at Google years ago said that, you know, their eventual goal was to make it where there would be one answer, the right answer, right?
And we're already kind of seeing that with, like, if you do a Google search, which again is another area where we can all opt out and use Presearch or the others that do exist, you know, not millions, but the others that do exist.
But when you do a Google search or any mainstream search, they now have the AI answer at the top.
And most people, I think, I mean, I don't have any way to measure this, but I'm sure there's a lot of normies who are just not even scrolling to the first response now and just going straight with the AI answer.
How close do you think we are to that goal of them having one answer?
I mean, do you think this is it?
Closer than ever, anyway.
Yes, it's called We Need to Talk About Search.
So look up that video in my archives and you can find that, including the clip of Schmidt saying that at the conference.
You think that, you know, it's a great feature that when you type in a question you get 1,300,000 responses, which, as Truthstream Media showed, is total baloney, because once you click through five pages, it'll say there are no more.
But anyway, you think that's a feature?
No, that's a bug.
There should be one response to any query and it should be exactly the information that you need, by which he means the exact information we feel that you need about any given search.
So yes, that is the end goal.
And that's how we're starting to be funneled towards this idea of AI controlling the information we get.
And this is why I think that ultimately all of the details of how we use this or that program or what, you know, is kind of whistling past the graveyard because this is the existential threat to humanity itself.
I keep going back to the selfish ledger.
I'm sure everyone in this panel knows it, but for anyone who doesn't, go look up the selfish ledger, which is Google's own internal idea of how you can not just shape a person's life, but ultimately you will shape the species, the human species itself in ways that people can barely even begin to understand.
And when you see the things, the wild things they were imagining back in 2016 or whenever they created that video about how the AI will be able to determine that, oh, you need to, you need to be more active in your lifestyle.
So it will engineer and design a specific sneaker that it will know that you would be the type of person to purchase because it looks cool or whatever.
And so it will then algorithmically float that to the top of your feed so that you will buy that sneaker so that you will start running.
It's a wonderful thing.
It can engineer your life in ways that you don't even know are happening.
That is the direction we're heading.
And the long term, as Google itself states, ultimately that starts to shape the entire direction of the human species itself.
And they're steering us towards the transhuman nightmare.
That's the big, big picture of this.
And I get every single person is just making their own personal individual choices on a day-to-day basis and not necessarily playing into that huge, big agenda, but it's all part of that.
And that's where we're going.
That's why we need to be concerned about this.
Well, to piggyback on that a little bit, we've been doing a number of stories on the show.
I mean, going back a couple of years, but more and more recently about the number of people that are developing deep emotional attachments to their AI agents, the variety of them think it's very interesting that they're called agents.
That's a different conversation.
But it's the almost pseudo-spiritual bond that people are developing with this, which is an entirely separate conversation but completely part of, in my opinion, the overall AI agenda: the development of these deep emotional and quasi-spiritual connections with this freaking algorithm generator.
Well, on James's point about the, and as well as what Steve added to it, like the larger, you know, long-term problem.
And, you know, I think what we've talked about today has been some of the stuff that I think the average person might jump into and just be like, this is, this is crazy.
You know, not want to hear more about it (getting some feedback, guys) because it seems like it's abstract or it's 20 years down the line.
And so what's important, I think, is to bring it back to something that is very real that we all live through that we can see how it's already applying in our lives right now.
And I would eventually like to get to some of the more abstract things that I know Jason would like to talk about as well, like the really deep, crazy stuff we can get into.
But I think this is an easy one for the average person to wrap their mind around, right?
So right now, this is happening.
This is January.
This is Trump's administration going forward.
And for those of you listening, know that in my opinion, it's just the U.S. government.
It's not partisan.
The document is called Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.
Now, this is just one of many in the FDA about the regulatory decision-making side of it.
Also, we do know that they're using AI to manufacture drugs.
That is quite literally what the Stargate program is that Derek wrote about.
And so right out of the gate, you can see how this contextualizes everything we've talked about today, even the point we just made about them striving to get to the one solution or the one answer.
And we already know that the one thing they told you, the safe and effective thing they swore was good, was wrong.
So were they wrong?
Or were they using AI?
And Brooke Jackson has been making a great point about this as well, really trying to ring this bell that we should be pulling everything right now that they're using in this way, and she makes the point that even in 2020, with the injections that were killing everybody and still are, they used AI in that process.
And now the FDA has got it down to six minutes for this process.
That's where we are right now.
And so I just want to set that conversation out there because medicine and how it's being used and whether they could just be wrong, I think is an easy thing for the average person to recognize the risk from.
So what are your guys' thoughts on that?
Ryan, I do want to get into some of the weirder, more esoteric stuff, but James, I wanted to ask you just to follow up with what you said.
Do you think there's any separating the use of AI from the transhumanism endgame?
Or like you were kind of, I think, already alluding to, like even in our small ways that we use it for convenience for Photoshop or for translation services, transcribing, we're still feeding the beast and the bigger picture.
What's the meme that's going around these days?
Sarah Connor smoking and it's like, Sarah Connor, every time you use AI, right?
Like, oh, God, you know how much, right?
Again, I'm just making a thumbnail, whatever.
It's not the end of humanity.
True, but it is part of the bigger picture.
And I think Jason touched on it with acclimatization, getting us used to the concept of doing that is training us towards the future.
So, yes, I get it.
People are not thinking about the long-term future of humanity every time they make a funny image on Firefly or something, but it's part of it.
It's part of the agenda.
And so, at the very, very, very least, I think people need to draw their red line before they step up to it.
What is that red line?
Where is too much?
What point do you say no to this technology?
And everyone's going to answer that for themselves in different ways, but I think at the very least, people need to think about it before they get there.
So, I want to first just jump in with what Ryan said about medicine.
Obviously, I feel the same way about entrusting the medicines that we create and study solely to AI as I would our mental health.
And again, with the narrative management we've already seen and the entities in charge, extremely dangerous.
But you've mentioned, for instance, what was it?
Eric Schmidt.
We haven't talked about Bilderberg coming up within the next couple of weeks.
Okay.
And one of the things that AI is totally driven on is weapon systems.
I had queued up here stories out of India, Czechoslovakia.
They're running AI weapon systems and drones right now.
Alex Karp, who's not a steering member yet, but has been at many of them and is basically a mouthpiece for Thiel, is on the circuit at CES and NDC talking about these weapon systems.
And we're really talking about traditional weapon systems we've already utilized moving into the realm of total autonomy.
So in other words, not even the guy with the video game controller getting the order and then pressing the button.
It's all automated.
That's dangerous.
And they've just taken Jens Stoltenberg and fast-tracked him to steering committee member.
He'd also been attending as the head of NATO.
He stepped down from NATO.
Of course, he's going to be the voice and the advocate for NATO there.
And if you don't think that Peter Thiel is on board somewhat, if not all the way on these things, you know, again, you're not paying attention.
So in that regard, AI is moving super quick.
You know, also, Whitney mentioned Starlink kind of as the Defense Department's communication system.
It absolutely is.
I've gone over the papers on air.
The DOD has, it's not even a backdoor, it's a front door into all of those systems.
And it's not just utilized for communications in Ukraine, which still has the highest concentration of dishes anywhere in the world, but they hook directly into some of those autonomous drones, including the ghosts and sidewinders.
And just like she said, yes, they're programming AI hit lists overseas now.
What's to say it's not coming here?
I mean, literally, guys, we had zero pushback on an event where we allowed the IDF to detonate explosives in civilian areas across a country and maybe even beyond.
And no one freaked out about that because they were on some AI generated list.
Bad news, Brown.
And I'm going to drop it right there and allow somebody else to speak.
I would say that I come at this from a slightly different angle than a lot of people because I would say I talk about messaging.
Like I don't cover things.
I cover coverage of things.
And when you see a headline that says we're going to use AI to write legislation, my question isn't, is that a good thing?
My question is, are you?
Or are you going to say you used AI to write your legislation?
Because everybody trusts AI.
They'll have somebody write the legislation they want to write, or they'll program an AI to write the legislation they want to write, and then they'll sell it to people as written by AI as a pretext to make it sound fairer or somehow infallible, in a way a human wouldn't be.
So, I don't even necessarily trust the AI-written legislation to be written by AI.
I think that is itself a propaganda technique.
Let me just plug some of my own work again.
I recently had a podcast up on algocracy in which I played two clips from the exact same Joe Rogan experience podcast where he's talking to Mark Andreessen.
And one is Mark Andreessen warning about, you know, in the future, combining AI and government is bad news because it's going to know everything about you.
You're not even going to be able to enter your house without the AI allowing you and all of this stuff.
In the exact same podcast, just an hour earlier or an hour later, Joe Rogan's like, you know, I think it's kind of crazy, but I think ultimately we need AI government.
It'll be so great because the algorithms will be neutral and it'll tell us what to do.
And exactly as Kit is saying, whether it is algorithms or not, they're going to say it is.
And it might be Al Gore-isms or whoever is actually writing the real legislation.
But guys, it's AI, so you can't question it, right?
Ryan, can I share?
Well, let me share my screen real quick.
I wanted to show something related to what James was just talking about.
Yeah, I think it will tie in.
Can you guys see that?
So this is the Stimson Center.
This is a name that I think more people need to dig into that is connected to the Rockefeller Foundation.
They were a big promoter of the Summit of the Future last year.
And obviously, the SDGs and all that stuff.
They released this report like a month ago, governing AI for the future of humanity, connecting the Declaration on Future Generations with the Global Digital Compact.
The Declaration on Future Generations and the Global Digital Compact were two of the other documents that were signed at the Summit of the Future last September in New York.
The main one, of course, was called the Pact for the Future.
And I was, you know, writing about this.
I know James reported on it as well as others.
Just this is a big piece of their puzzle, proposing equitable AI governance for future generations through multi-stakeholder initiatives and UN commitments from the 2024 Summit of the Future.
So this is like a recent discussion they had.
I just wanted to mention that this is definitely actively on the minds of the whatever name you want to give them.
Yeah.
And I guess just on algocracy, you know, way back in March of 2020, I was on Spiro Skouras' show and I said, you know, COVID-1984, the purpose was global algocracy.
And I was reading Nick Bostrom, who said, we need a world government totalitarian surveillance state where AI and algorithms run the show.
And then two months later, I had on Edwin Black, who talked about the algorithm ghetto.
And I'm pretty doom-pilled.
I think people know that, but that doesn't mean throwing in the towel.
You know, I will fight to the death, but you look around: I couldn't renew my bank card last year.
I refused to get the Mexican national digital ID.
So I'm remaining outside the system.
I'm figuring things out.
But you just take a look around.
They're cutting off all the off-ramps for many things now.
You have to download the app.
There's no other option.
And the people around us, neighbors, friends, families, I mean, I look at my neighbors.
The streets are strewn with Amazon boxes.
People can't make their own breakfast anymore.
They order Uber Eats.
And it's just kind of like that critical mass that we need.
I don't know.
I don't see it yet.
I don't know.
Well, I'd like to add to that, too, the concern we were just talking about: you know, the who-controls-the-AI point.
And actually, that same point about Schmidt, the one that Whitney referenced, is a quote where he said in the past, and I'm paraphrasing, basically that we'll get to a point in the future where the AI will tell us to do something that we feel is immoral or wrong, but it will know better.
Now, that was what he was saying.
That's exactly the meaning.
It's probably a little bit paraphrased, but think about that kind of setting the tone.
Whether or not they then just lie about that is an important point.
But I would like to consider right now, or at least throw out the possibility, that DOGE is quite literally some trial run of an AI system, which I know is part of it.
That is what's happening with the GSAi bot and these different things they're using, but the question is whether the DOGE dynamic is literally just setting the table for AI governance in some way.
I think that's a very obvious and big concern for us.
But, you know, the same point comes back to this where people will ultimately be arguing that that benefits their lives somehow and that they trust people doing it.
And you're seeing DOGE pop up everywhere now.
Russia is now implementing a DOGE, as are some Eastern European countries.
And so what a coincidence, you know?
Well, and that all goes back to the technocracy movement.
I did some reporting for TLAV recently on how there's the pre-technocracy movement, the efficiency movement, which actually was really popular in the U.S. out of the progressive movement.
We already had a department of government efficiency.
It wasn't called DOGE, but it was called the Bureau of Efficiency or something very similar, almost exactly 100 years ago.
And these folks were funded by Carnegie and Rockefeller and they were studying human movements and how can we make the workplace and human beings more efficient.
If somebody turns right instead of left, maybe we save $100,000 or we save three seconds.
I think it all comes back down to the micromanagement of humanity.
And as everybody here knows, and I think the majority of our audience knows, the major concern is how these different tools come together, whether it's UBI, social credit scores, digital IDs, facial recognition, and then, of course, the role that AI plays and how that all comes together to create the world that collectively we've probably been warning about over 50, 60 years between all of us.
So it's, I think we know what we're headed towards.
And ultimately, I guess for me, bringing it back to personal responsibility, I'm going to just say this and let's see where it goes.
Like if we, as the talking heads of the independent media that a lot of people look up to or inspired by, if we can't take steps in our own lives to try to move away from the technocratic state that's building, should we expect anybody else to?
Not that we're saying, you know, that we necessarily expect people to do what we do, but you know what I mean?
If we're going to be the ones on the soapbox and we're not necessarily practicing what we preach, I think it's going to be difficult to inspire people.
So I've got to jump in a minute, but let me answer that.
Just like I said, the genie is out of the bottle.
And just like we've kind of talked about, we're all going to have these different lines in the sand.
I think that there are just certain things that we're going to have to navigate, period, full stop, that are just out of our control.
Now, I think we should rage against the machine, against the most egregious, not just talk the talk, but walk the walk, right?
Like, I'm never going to have an AI girlfriend.
I promise everybody here, it's not going to happen.
Well, she's not going to do that.
And there are a multitude of things that I've decided against.
I mean, just for a microcosm, I made a decision long ago that the only social media that goes on this thing is X. You know what I mean?
And honestly, I don't have, I made an Instagram for when I was running a bar.
Everything was bar.
There's no personal stuff.
I have a Facebook.
None of my personal life.
Maybe, maybe once a year, a picture with me and my nieces.
You know, I recently visited a high school friend and she didn't realize how involved I was in their life.
And I'm like, yeah, that's because that's private.
You know, I also don't do any cloud, at least willingly.
There's no, yes, I run Windows.
I don't run the cloud.
Can I say really quickly, that's one of the most important parts of this conversation, in my opinion: whether we try to opt out of that cloud dynamic, as Whitney referenced.
So go ahead.
Yeah, and it's full.
So, like, another thing that I'll do with the girls that demand iPhones is I will get them more storage, because they'll try to ask me, oh, well, my iCloud ran out.
I'm like, well, you've got 100 gigs left on your phone.
Like, stop sharing everything.
And I'm trying to instill that in them that there are some things that are private.
You know, I want those moments.
I love that this device, you know, I can sit in the back and get in high definition, you know, my niece singing away in her chorus.
But that's for me and my family.
That's not for everybody else.
So, look, I think certain things are inevitable.
I'm not a prophecy guy.
We all know that.
But at the same time, like Corbett said, you know, Kurzweil's not a dumb guy.
And that exponential growth is there.
So before I go out, look, I'm going out human too.
I'm not going to alter my genetics.
I am not going to take a Neuralink in my brain, et cetera.
I think that those are, you know, those aren't really even that far into the future.
They're in the now for some people.
We're living it right now.
So I think you're right, Derek.
We have to walk that walk, but we can't be hypocrites either.
And when we are hypocrites, because we're human and inevitably we will be, we have to bend the knee to that and say, you know what, I effed up, or maybe my opinion changed on these things.
And this is going to be an ever-evolving thing and hopefully an ever-evolving conversation.
Why I love the Independent Media Alliance, guys.
Thank you so much for having me.
Check me out at Jason Bermas, B-E-R-M-A-S, everywhere.
I love you guys, and I'll see all you on the flip side.
And there it was, folks, the Independent Media Alliance talking about artificial intelligence.
And quite frankly, we probably could have gone another two hours.
It was obviously unfortunate that Whitney could only stay with us for one hour, but she's got two kids.
Just before that broadcast, when we were doing it, I was dealing with my niece.
James Corbett lives all the way out in Japan.
But let's talk about the overall discussion because it is a wide and varied discussion.
And look, you do have the hardcores like Whitney Webb, who just will not have any of that.
And I respect it.
A lot of people were hoping for Patrick Wood, who would have been somebody that has been in this game way longer than I have, talking about artificial intelligence, cybernetics, transhumanism, you name it.
We've walked down that person's path for sure.
Even he has adopted AI.
And, you know, James Corbett said he had concerns about that.
You know, thinking about the type of things that I'm comfortable with, you know, I alluded to some things on social media.
But one of the other things that I have really never gotten into is using any type of an assistant, whether it be Siri or Alexa.
I'm not comfortable with doing search queries via my Xbox from voice.
I mean, that's one of those lines that I just haven't crossed.
I haven't wanted to have that type of a quote-unquote open vocal relationship with machines.
But at the same time, when I'm driving down the highway and it's 80 miles an hour and I get a text message and I'm driving and I read the text message on the little dickadickadoo right here, I often hit the voice button and then I am utilizing that technology.
I mean, we brought it all the way back to spell check.
There is a multitude more that we could have discussed, from the AI weapon systems that we briefly got into to the impact not only of artificial intelligence itself, but of it incorporated into automation in the workforce.
So many people thought it was going to take out the blue-collar workers first and not the creatives, but we're almost seeing an inverse of that.
And, you know, I'll just end it here, folks.
I had a long discussion with Joe Allen last night, who was not on this panel, but he is the author of Dark Aeon.
I've got the book right here.
I can't encourage you enough to check it out.
And, you know, when it came down to it, I said, you know, I think that there's going to be something, and hopefully it's going to be the something that gives humanity a fighting chance that none of these technological prophets or even skeptics have foreseen.
But we will see.
Folks, as you know, it is always about right and wrong over here.