All Episodes
May 17, 2025 - Info Warrior - Jason Bermas
50:49
James Corbett Whitney Webb TLAV And More SUPER AI PANEL!!!

Hey everybody, Jason Bermas here.
A real treat for everybody out there.
We're going to show you part one of a two-part panel that I did on artificial intelligence with some really big, heavy, in fact, probably the heaviest of hitters in the alternative media.
When we're talking artificial intelligence, you want to be talking to James Corbett, Whitney Webb, The Last American Vagabond, and beyond.
This is an ultra panel.
Stay tuned, and we'll come back at the end.
And...
Hey, everyone.
This is Derrick Broze with the Independent Media Alliance, and welcome to another panel.
We've got a full house tonight.
We're going to be diving deep into AI, and we want to get to it as quick as possible.
But before we do that, just a couple of quick announcements.
Ryan, I know you've got something big to share with everybody.
Yes, we want to start off with this pretty great announcement.
Shared rookie mistake.
We're going to get started with this opening from Odysee, which I think is...
It's pretty cool.
It's a pretty nice overlay of not just the portal that they're starting, the decentralized internet aspect, but really kind of shouting out the Independent Media Alliance.
I'll just give you a couple of the parts that I think were important.
As they just said, decentralization in action: Odysee welcomes the Independent Media Alliance to Portal.
At Odysee, we've always believed in giving creators the tools to control their own destinies.
That's why we're thrilled to announce that the Independent Media Alliance, a coalition led by Whitney Webb, Derrick Broze, and Ryan Cristián, will soon be launching its own portal within what they're calling their emerging decentralized media ecosystem, or DME.
This is more than just a new partnership, they wrote.
It's a powerful demonstration of what's possible when independent voices are equipped with the right tools to protect their editorial integrity and speak directly to their communities without relying on the whims of centralized platforms or algorithmic gatekeepers, which I think, you know, in the AI conversation, that's definitely relevant today.
It says the portal is the product of years of work toward a single goal, building a publishing infrastructure that puts creators in full control.
With Portal, we're moving beyond video, offering a complete decentralized publishing platform that functions more like an independent website builder, but with freedom, resilience, and sovereignty baked into the core.
Every portal, including the IMA's, operates independently.
Creators and collectives manage their own commenting systems, messaging, monetization, everything about it.
No centralized authority deciding what's acceptable, just creators and their audiences building relationships.
With their portal, the IMA, they wrote, won't just publish content.
They'll set their own terms for distribution, engagement, monetization.
They'll be a living example of what decentralized journalism looks like in practice.
And we're proud, they wrote, to support that every step of the way.
It says, we believe the IMA's move to portal signals something larger.
It's a turning point, not just for Odysee, but for the media landscape as a whole.
And I really agree with this in a sense that this is a new step with the decentralized internet direction and really just what we're doing.
There's no financial mechanism in all this; Odysee is supporting this because they believe in what we're doing.
And I just think it's important that we're trying to strike out in a new direction.
So if anyone wants to comment on that, I think it's a pretty amazing step.
And then we'll give it back to Derek and get going.
All right, well, no thoughts on it in general.
I'm just excited to announce it.
Oh, did Derek drop off?
I think he'll be back in a second, but I just wanted to say...
Yeah, go ahead, Hakeem.
I'll throw it to you.
Go ahead.
Yep, so this is my first panel.
I'm super excited to be here with the rest of you greats.
And yeah, we've been watching the development of Portal, and it is really, really exciting to have that contained.
And I know how hard building software is, so I think when this platform launches Portal, it'll be the perfect way to combine forces together.
And I know we all have audiences, and that's kind of the problem right now with decentralization.
We're all fractured, so here there's going to be some sort of unification.
I think it'll make a big splash.
So I'm really, really excited.
And I'm glad you're here as well.
I'm glad Whitney's joining us today.
And I think this is going to be a really great panel.
And so the main focus of today is artificial intelligence.
And really, just in a broad sense, its influence, how it's changed everything, but specifically research, investigation, media, and anywhere else we think that relates.
And I'll just kick it off with kind of the overarching...
You know, really the main things that I thought, and then we'll kind of go around, and everyone can give their kind of thoughts on where this should go.
I see this as four main focal points for me that I think stand out, and that is just individual researchers using AI and being manipulated by it.
Ryan, Derek is back.
Can you add him on?
Yeah, yeah.
Give me one second.
Let me...
All right.
Let me keep doing that real quick, and then we'll go back to Derek.
So, my point was, individual researchers using AI is an obviously important point, with people being manipulated by AI online.
Then there's journalists that use this and potentially lean into it too much, as well as also maybe blindly trust it.
And the medical aspect of AI manipulation, when we talk about vaccines and medical research, and then just government in general and the 50,000 ways that could apply.
But we'll also talk about the obvious benefits of how this can be used.
So, go ahead, Derek.
You want to kick us off?
Yeah, so thanks for that, Ryan.
And I want to just go right into it again.
We've talked about everybody here being crunched on time.
I want to start by playing these two videos.
I hope Patrick Wood will join us.
He is part of the Independent Media Alliance.
He was on a past panel.
So I don't really want to speak about Patrick without him being here, but I do believe he'll show up.
And so this kind of came out of a conversation I saw happening online where different people, some who are more well-known than others, or just people following along, were like, I'm so disappointed in Patrick Wood and how on his latest show he said he's going to use AI.
This is the guy that's been warning everybody about technocracy for decades.
How dare he?
And I know that we have a wide range of opinions in here.
So the main focus, kind of original idea, was just to let everybody here share, have you used AI?
Where?
How is it valuable?
Are you super, like, no AI in any way at all?
And just kind of see where that conversation goes.
So if we could just start, Ryan, by playing just those two short clips.
This is actually from Patrick's show, where he speaks for himself.
And says some things about AI that triggered some folks.
Use AI for our own purposes as long as we can. It'll be censored, it'll be destroyed probably at some point, but for now, it's a useful tool to uncover things that you would not otherwise see on the internet.
Research.
If you don't use it, you're going to be left in the dark, I'm afraid, because if you do use it, well, you're going to see you have more useful information coming into your brain.
So I think that was the main thing that triggered some people out there, his saying, if you don't use it, you're going to get left behind.
I think that was a bridge too far for some people.
But yeah, so I just want to use that to open the door.
And of course, if Patrick shows up, give him a chance to elaborate on that.
Of course, as we usually do, when you go around, guys, please just introduce who you are, where you're working from, or what outlet you have.
And maybe we could also start with, do you use AI?
And I'll just start that and pass it out to whoever wants to jump in there.
First of all, I went and actually defended Patrick in the Twitter.
I didn't think it was that big of a deal, that AI is a tool.
Obviously, we all have concerns here about the direction and the way it's going to be used, AI governance, etc.
But I guess I wasn't that offended by what he said.
For some people, again, who maybe expect those of us who speak against technocracy, they might also think that that means we're anti-technology, which I think is a big confusion some people have, equating technocracy with technology.
So yeah, I didn't really get that triggered by it.
My personal use of AI has been extremely, extremely limited.
I, for example, use it to transcribe videos sometimes so I can get quotes for articles instead of having to sit there and listen and write them as I have done for years and years.
So that is just a time saver.
I've used it recently to transcribe, and I'm talking about two specific programs, not ChatGPT or Grok or these others.
I've used it to transcribe like...
250-year-old handwritten documents about the Jesuits.
That would have taken me a long time to try to look at and decipher myself.
The AI does it, and then I go in, of course, with my human eye to check it and see what's wrong.
But I've only used it in limited ways like that.
I've played around with asking Grok some questions to show that it can be wrong.
I've had it fabricate quotes out of books that I have on my shelf when it's telling me, no, it's on page 29, paragraph 2, sentence 3. And we'll just keep going and going until I'm like, look, there's nothing there.
And then it's like, whoops, sorry, I wasted all your time.
I just must have taken some information from other places.
And so, yeah, I'm not an expert on the actual use of it.
But overall, my general view...
It can be very useful for time-saving and things like that.
Of course, the bigger concern is just like with Google Maps and GPS in general, people give up their own navigation skills or other skills.
And we've already seen this.
You see this all over Twitter where people just say, at Grok, is this true?
Instead of doing any research, it's like, Grok, is this thing true?
And then they go with whatever that answer is, and that's kind of where their research ends.
So yeah, those are my initial thoughts on this whole thing.
Well, I have limited time, so if it's okay, I'd like to at least get some of the points that I'd like to present today across since I have to leave at the top of the hour.
I think right now we're at the building-trust phase of AI, the carrot as opposed to the stick.
And I think that the more that people rely on it, the more they're going to eventually hemorrhage their ability to investigate without it, their ability to think creatively without it.
Or critically without it.
And I think ultimately it's meant to be a gradual process, but it's a process that's intended to happen as people become more dependent on it over time.
You have extremely bad people like Henry Kissinger and Eric Schmidt openly saying this is the intent of it, years ago in a book they wrote together, that this was basically the intended plan of it.
So now, you know, it's all fine to use for this and that.
We can all find...
You know, reasons, you know, why it might be convenient and all of that.
But ultimately, I've been talking for a few years.
And sorry for being extra congested today.
But basically, convenience is the way to get us to go into this future technocratic system.
A lot of people that have been warning about this for a long time have been, like, waiting for the stick, waiting for the violence.
But it's really meant to be, in my opinion, a way to sort of engineer voluntary acquiescence to these systems that are being built.
And I think AI is the ultimate way to do that.
And that eventually people, you know, if you don't use it, you lose it.
Kind of like Derek mentioned with Google Maps and navigation skills, or mental math and relying on calculators.
This is the same type of function, but now it's being used by people, you know, like the "is this real?" example, with AI.
AI hallucinates and presents wrong output all of the time, so how are you going to ask it if it's real or not?
And our controllers, people like Schmidt and Kissinger, want us to do that.
Because they want us to use AI as the arbiter of what is real and what is not, and using AI at the same time to basically spam the digital realm with things that are not true, what we think are true, so that we fail to really have confidence in being able to distinguish between what's real and what's not.
And so, you know, personally, I think a lot of people are going to rationalize their use of AI.
And that's fine.
I'm not really interested in telling people what to do, but I think a lot of people engage with it without being aware of the risks or the ultimate designs of what it is.
And how we as content creators can be very manipulated by it, and by algorithms, and by commenters that may not be real people but bots.
I mean, keep in mind that the biggest bot farms are run by states.
Militaries, generally.
And you have, you know, one of the main social media sites now also being owned by an AI company that's run by a major military contractor.
There was a story a few weeks ago about fake AI people on Reddit changing actual people's opinions just by spamming their takes, I guess.
And humans have, you know, always been swayed by the opinions of the herd and all of that.
And now they have micro-data on all of us, having, you know, decades of data on most people, of their decisions from the minute to the big.
And are able to, you know, pinpoint specific demographics and herd them to specific conclusions.
And I think really there's not enough discussion about how that impacts our discourse, even as independent media.
Obviously, all of us like to think of ourselves as independent free thinkers, but if you post a video or post a thing and it's spammed by some comment-bot army that's super negative about it, maybe you won't talk about that stuff again, and you'll think they're real people.
I mean, a lot of us don't necessarily think about that because, you know, the internet, even five years ago, was really different than it is today.
And, well, I have more I could say, I guess, but, you know, I think the risk-benefit of a lot of this isn't necessarily talked about, because people are just kind of reaping the small, convenient benefits of it, which is fine.
I understand people are busy and all of that, but, you know, I personally ideologically view a lot of what's going on like as, you know, we're in a war.
It's not really just an info war.
It's a war for, you know, humanity.
And, like, the human soul and our ability to create.
And if we stop creating and start outsourcing that to AI, you know, we're screwed, basically.
Have you used it at all for your research, Whitney, in any way?
No.
Well, let me go ahead and piggyback on that, since this is a perfect opening.
And just for those listening, I believe there's a lot of different opinions in the panel today.
I'm sure there are.
Yeah.
No, no.
I mean, in a positive way.
That's why we put this together, because all of us are coming at this from the very skeptical perspective.
So I'll make the first step in that direction, where what's funny is I'm right there with Whitney in my skepticism, my concern.
I 100% agree with everything she just said about how it's being used, and there's no real but coming in here.
My point is that I have been very averse to it, to the point where I absolutely wouldn't even look at it, wouldn't touch it.
Until the first time I decided to, when I was noticing two things.
The first, I think, was an example of what some of the other people mentioned today, sort of like trying to find a date that I couldn't find, for example, on Brave, somehow convincing myself that Brave was the lesser evil, kind of a dynamic.
And so that started to open the door for me, which I talked about on my show, how I got frustrated about that, but I'm trying to resist that.
But then at the same time...
The real part where I started to use it, and I'm going to keep doing this at least for now because I think there's a relevant point in here, even though I agree with Whitney's point, and she's probably right that maybe it's an overall negative, is the whole Grok thing.
What I find so interesting, on whatever platform, is that there are people that do blindly trust it, especially from the partisan angle that it's somehow a conservative thing or it's Elon's thing, and so they do use it as sort of like, see, you're wrong because Grok said so.
But then this interesting dynamic starts happening where they're almost now attacking it because they have lefties, I guess, going, Grok, is that true?
And it's showing that they're wrong.
And so it's like this interesting back and forth where I've started to point at it and say, but look, your thing is saying that you're wrong, not to say that I blindly trust it.
But I agree.
I feel like that is, you know, just I shouldn't be doing that.
Quite frankly.
But I do find there's a benefit in that moment.
And so it's kind of that give and take about whether it's worth it, whether it's a long-run negative, which I do flatly think.
But my overall point is that I do think that it's dangerous.
I think we're training these models for them.
I think we're building our control structure, right?
At the same time, though, I do know that there are beneficial aspects like Patrick's talking about.
I'm sure others, I think, are going to mention the use of this and research how it is very powerful.
But I think I'm right now at the point where...
I don't think I'm going to allow myself, even if I think it's a possibility that will help me, to go any further than I already am, maybe even stop doing that.
But I do see the benefit.
And so it's like, I'm always trying to be careful, like the internet point, that we don't miss the forest for the trees, don't lose a tool that may be used to fight back against the system.
But again, I want to reiterate, I pretty much agree with exactly what he said.
And I think we are hurting ourselves by using it, but it's a difficult time.
I'd like to add on to that.
And first of all, thanks.
It's really good to be here with y'all and on such a spicy panel, might I add.
For those of you who don't know me, my name is Hakeem.
And I just want to add on and say that I agree with everything that's been said so far.
And we can all agree, even the people who use AI can agree that they're getting dumber by using it if they're depending on it too much.
But I want to push back a little bit on this idea that it was designed as a weapon to control people's minds, because you could make the same point with the internet, that DARPA created it to actually be a system for surveillance that captured data all over the world.
And then it ended up being the reason why we can have so much knowledge from different time periods and do this research.
And it's interesting as a neo-Luddite to oppose it, but you're having to use the Internet at the same time to get your point across.
And I consider myself a techno-optimist, but I was super resistant to AI for a very long time, and I still am very cautious and aware about how I use it.
But I do think like everything that's been said so far, like the feebleness aspect, people don't know how to tell directions, but is there a conscious way that people could be using their maps so they're actually increasing their rate of learning?
And the same way with AI, is there a conscious way that you can use it that actually helps you to get more knowledge and to improve your reasoning skills?
Right now, it doesn't really look like it because 86% of college students, according to a survey, use AI.
About a quarter of them use it daily and professors are freaking out because of this.
But for myself personally, as a software engineer, I was super resistant to it, like I said.
And then I started to use it for small things like debugging.
And one thing I came to understand is that the corpus of knowledge that's out there in the world is so huge that you couldn't possibly expect yourself to have an understanding of everything.
It has at least a little bit of knowledge of everything.
So even if it points you in the wrong direction, it's a starting point.
And like Derek was saying, you have to double check every aspect of its use, but it can be used to improve your skills, and I do believe that.
So what I've been thinking about is the bright side of AI.
Are there things that we can do to stop it from hurting us or being a negative as a whole?
And one of those is to not support large AI companies.
So as quickly as I could, I saw what ChatGPT was doing, which I didn't think was that good in the first place.
But I started running AI at home on an air-gapped system that's not connected to the internet, running your own local models, which is a tiny step in the right direction, because at least you're not feeding back into the surveillance infrastructure.
And then also to check in with your own knowledge and intuition and reasoning at every point.
Not just accept any answer it gives you, copy and paste it and present it as your own ideas, but to really be critical of what it's saying and to go back and research it and to use it as a tool and use it to improve your skills.
I think there are some things that have been good about its use, but generally, it's like one of these technologies where, as a Luddite or a neo-Luddite, you want to fight it, but also the genie is out of the bottle, and how do you put it back in?
Honestly, how do you put it back in?
And so that's another question I think would be really useful to talk about.
Are the people using AI, specifically in my mind, I'm thinking the tech companies, are they just going to surpass everyone else?
But I know that's a lot.
Please.
I want to jump in here because, first of all, I don't think you can put anything back in the bottle.
If it's in the public arena, it's in the public arena right now, and you're going to get derivatives of it no matter what.
Now, I want to start this by saying I actually use AI on a daily basis, but not for research purposes, ever.
And I'll get to where I've used Grok in the past, and it's usually...
I find Grok to almost be a mirror image of the narrative that you're willing to accept because it is very deceptive.
Where I end up using AI is Photoshop for my thumbnails.
And again, you kind of see a mirror image of what you feed into the AI.
So if I want a thumbnail to look a certain way, like an artist or some kind of a wallpaper, I grab it and then I prompt it and it'll give me a dozen things in a couple of minutes.
As a graphic designer, you know, my first desk job creating these type of things, obviously I see the moral conundrum.
But at the same time, it is the next tool.
When we talk about, and really the AI conversation I think is going to end up going further than this, but we've really talked about language models and our interactions with them, right?
I think we're in the beta phase of narrative management, this idea of hallucination.
It's programming.
We have to acknowledge that.
So, for instance, when I first asked Grok to just assess my Twitter feed, in the first two sentences it lied about me and said that I was basically this person that said people didn't even exist.
Like, I made people up.
I have the four prompts.
So I challenged Grok on it.
I said, well, who has Jason Bermas ever said didn't exist?
And then it brought up, Flight 93 that I had questioned what happened there.
And I said, well, I never questioned who was on that plane and it apologized to me.
And now it starts placating to you and telling you how smart you are.
Isn't it nice when it apologizes?
Oh, it'll apologize to you.
Now, the really interesting thing is, you know, you brought up DARPA and the ARPANET.
So one of the tools that I did use it for was online showing what the AI will show you about the creation of Google.
And at the very beginning, you know how sometimes you'll get a Google AI representation?
So again, it's all what you feed into it.
And I had discussed how Google really got its launch point from the Digital Library Initiative, which was DARPA and NASA run.
All right?
So when I talked to Grok about it, Grok obfuscates it, and then you bring it up, and then like, oh, well, you're right.
And then you bring up that it's got contracts with In-Q-Tel and the military-industrial complex, in artificial intelligence and quantum computing, NASA in particular.
And then it tells you, well, that's the conundrum, isn't it?
How can we trust the AI when it's programmed by these entities?
So again, it makes you feel smart, but it's testing the waters all the time on what kind of narrative it can push.
And that is the real danger because, like you guys have all said, people are going to start trusting this stuff for their medicine, their everyday life.
They're about to give up, and a lot of people already have, you know, to autonomous cars.
Like in Phoenix right now, it's everywhere.
It'll be a cold day in hell before I take an Uber that doesn't have a person in it.
I'm going to leave it there because we have such a vast panel and I want people to get into it.
But when we are talking large language models, we're in the beta right now.
We're in the narrative management phase.
And like she said, the trust building phase.
How many people are going to buy into this, into a new telemedicine person?
Because we do have AGIs that are getting better and better.
Hell, they just let that dead guy have his sister make an AI-generated impact statement against the guy that killed him, and they admitted that in court.
I saw that.
James, do you want to jump in here?
It's your first time being on the panel.
We haven't heard from you in a while.
Do you use AI at all?
Before I answer that, let me just say, I wish Patrick Wood was here, because, man, I don't know if I'm, like, Kanye-level offended, but I am offended by those statements that he made there.
Yeah, I have things to say, and I'd like to have him here to hash it out.
But anyway, do I use AI?
It's a great question, because as you were talking, Derek, I was thinking to myself, well, no, of course not.
But then you talked about, oh, machine transcription of, you know, reading the Jesuit texts or whatever.
And then I realized, oh, you know, I use the machine transcription all the time.
Like when I do a transcript of one of my podcasts, of course, I'm not sitting there typing out every word.
No, I get the machine-generated transcript.
I send that over to my human editor, Susan, in the U.S., and she goes through it and makes sure it's all right and, you know, puts the words in the right order and corrects the grammar and spelling and all of that.
And then I publish that.
So if you call that AI, then yep, you better believe that I use that.
And in fact, that was one of the things that really first clued me into where this exponential curve that we're on was starting.
Because I always tell the story back in 2013, 2014, whenever it was that YouTube added the automatic captions to their videos.
And I remember turning them on at the beginning just because it was such a laugh.
You would turn them on and they would just be gobbledygook, total nonsense.
And I would just laugh, you know, oh, these stupid machines can't tell anything.
Great.
And then I tried it again like a year later, and it's like, oh, it gets it every single word.
Exactly correct.
And now it's at the point where there are times when I'm watching a video and I can't hear what the person said.
I turn on the auto transcript and it hears it.
Oh, yeah, that was the word.
Oh, right.
So, yeah, that got really spooky really quickly.
And that's the kind of exponential curve we're on generally with this type of technology.
Ray Kurzweil is a stark raving lunatic on a number of things, but I think he's right about exponential technological trends and what that means, and how quickly this entire conversation in 2025 will seem like something from a previous century five years from now.
Having said that, the only other use case that I have for AI is, as Jason was saying, I have used Adobe Firefly a couple of times.
When Grok was not available to make a thumbnail for me.
I need a thumbnail, I need this image of this thing.
Okay, go for it, Adobe.
Make this image for me.
And I have used that a couple of times.
Having said that, I want to put out a couple of requests.
First of all, to independent media producers generally, please stop publishing articles about this incredible conversation I just had with AI.
That is not a news story.
I don't care.
That is not interesting.
It is not groundbreaking.
AI is going to tell you whatever you want to hear because it is not there to make some sort of case based on facts, logic, and evidence.
It is there to be a persuasive argumenter.
And that's exactly what Whitney was talking about with that, hey, did anyone see that story?
The researchers went into that subreddit about, you know, Change My Mind or whatever that subreddit was, and used AI to basically hone their models so it would become more persuasive at changing people's opinions.
That should tell you a lot about where this is going.
So independent media producers, please stop telling me about your conversations with AI.
I couldn't care less.
Secondly, for people out there in the general audience, please stop sending me emails about your conversations with AI.
I couldn't care less.
Please stop sending me your AI-generated music.
I couldn't care less.
That is not music.
You are not writing songs.
You are putting prompts into a machine and it is generating stuff for you.
That is not art.
That is the antithesis of art.
It takes the human creative element completely out of the equation.
Why would I care about that?
I am on team humanity, not team machine.
So please stop sending me that crap.
James, just real quick on that idea of it not being art.
Obviously, there will be people who take a different position.
I went to this debate years ago between this artist, Android Jones, who does digital art, and people who were pure painters before, who said, oh, digital art isn't real art, you're using computers, etc.
And there was a debate between him and then other artists, if you will, who create AI-generated images.
For example, I have a friend who's created an entire tarot deck that is AI-generated, and then he packages it and sells it.
He prompted it until it got the look he wanted.
Do you think it is as simple as just saying that's not art, or do you think that really does come down to just personal opinion, or do you feel strongly about it?
I think it is as simple as that.
That's not art.
I definitely feel that way with music.
I've taken an aesthetics class at university, so I know a thing.
Let me say this.
The only thing I learned in that class is, what the hell is art?
I don't know.
So don't take it from me, I guess, is what I'm saying.
Okay, but just generally speaking on the AI subject, I do have my qualms, as you can see, about AI.
Of course, it is wrong.
It hallucinates.
We know these problems, right?
But you could imagine, oh, those are just some, you know, bugs they're going to wrinkle out of this.
Whatever.
They'll iron that out eventually.
More importantly, it is making your brain atrophy.
There are already signs of the cognitive decline of the people who rely, over-rely on AI to do their thinking for them.
But okay, yeah, okay, some people will take it too far, but as long as you know how to use it properly.
Okay, so how about the...
The controlled, centralized nature of the AI implementations that most people use, the Groks, the ChatGPTs, etc.
That obviously is a potential tool for censorship and control in the future.
I think exactly as Whitney said, this is the carrot phase.
There will be a stick phase.
And you know it's coming a few years from now.
Suddenly, wow, this isn't as useful anymore.
Now it's just telling me all the propaganda stuff.
But that can be countered, like Hakeem says: well, just run your own local implementation and make sure it's, you know, air-gapped, and don't connect it to the internet.
So you've got your own, as Ernest Hancock often says, I want Jarvis, not Siri, as in I want my own personal implementation of this decentralized off the grid.
Okay, all right.
Also, another problem with this, the only information that AI ever has is digital.
Information that has been digitized.
So all it knows is what is on the internet or what has been pumped in through digital channels.
It knows nothing about the actual real world that has not been digitized yet.
So that's kind of a flaw.
But all of those things are, you could say, well, they'll get better at it.
It'll be better.
They'll refine it.
But no, the fundamental problem is what Whitney points at.
This is an existential threat to humanity.
We are going to lose what it means to be human the more we attach ourselves to these machine implementations.
And it is not some easy step that I'm saying, oh, okay, so just don't use AI and we'll all be fine.
Because I will admit, as Hakeem was talking about there, you know, what's the difference between using internet for your research and using AI for your research?
Hey, look at it this way.
I use Wikipedia.
I use Wikipedia.
Now, how do I use Wikipedia?
I go straight down to the references at the bottom, find the article that they're linking to, and go read that and see. Obviously, I don't cite Wikipedia.
Honestly, the people who write those articles (AI told me this) laugh at the people who cite Wikipedia, right?
Like, no, you don't understand how this works.
Maybe you have a conversation and it tells you something and you go and find it independently and you cite that.
But no, I use Wikipedia.
So what's the difference between that and using ChatGPT, right?
It's just a tool.
I get there is definitely a murky line here, and none of you would know anything about me if it wasn't for the internet and my use of this digital technology.
So I'm not floating on a cloud here.
I'm not saying there's an easy answer.
But I know where this is heading.
I know the path we're on.
And I do not want to end up at that end path because we are birthing the silicon intelligence that will be the new life form that eclipses humanity.
And I don't want to live that Ray Kurzweil nightmare.
Thank you.
In the same way that, you know, the internet itself is an extension of a pilot program from DARPA, stemming from ARPANET, AI is an extension of that.
Especially where, you know, Grok is concerned: xAI, which started out of the Pentagon, out of DARPA, owns Twitter now.
You know, I mean, it's so there's no real pushback, Hakeem.
It's just this is, you know, an evolution of that.
And we're, you know, the convincing argument for a lot of people right now is use AI as your search engine.
Well, if you're not double and triple checking and stuff like that, then you're doing it wrong, you know?
And I mean, a bad carpenter always blames his tools, right?
Like, it's always the tool's fault, but you are supposed to measure a couple of times.
You're supposed to actually do the physical work yourself to gain some certainty, you know, to the best of your ability.
And that implies you doing extra work if you're going to use, you know, AI as a search engine as a jumping-off point.
I use it for background for thumbnails.
And then I do, like...
A handful of other layers of usually trippy or silly shit, but memorable.
That's about the extent of how much I ever want to use it.
Can I have one quick point on that about the picture just because it's interesting about the copyright overlap that I'm dealing with?
It's an interesting thing to think about, right?
Creating the images, which I thought about doing, Jason, because it's an easy way to sidestep that argument.
But that also overlaps with the art point.
You know, it's an interesting point.
Not that I know which way to go with that, but it's worth thinking about that you can create these manufactured images, which aren't necessarily art, that nobody owns, that you can use without getting copyrighted.
And I'm sure they'll end up getting to some argument that you took Grok's AI image or something.
You know, but it's interesting to think about.
Go ahead.
Whoever wants to jump in next.
I'll go next.
I have used AI in the same way Jason and James have said.
Sometimes you just need a picture and you just need it, unfortunately.
Even if you sign up to proxy services and stock services like Adobe Stock or whatever, half the images in those stock libraries are AI generated anyway, so you're just using AI by proxy.
It never even occurred to me to use it for words because I'm precious about my words, I suppose.
But, like, coming at it from, like, a writer's point of view, asking someone to write it for you, it makes you feel uncomfortable.
But I think, I think in some way, the more meta-conversation, the more interesting conversation is, it already is becoming, as somebody said earlier, a partisan issue.
People have their pet AIs, and they'll say, well, this AI is the good AI, and this is the one that works.
I ask Grok, and Grok tells me this, and Grok tells me what I want to hear, or it'll be ChatGPT, or it'll be the Chinese one that was in the news a little while ago, Deep something.
Everybody remember?
It doesn't matter.
DeepSeek.
DeepSeek.
But there will be a whole bunch of people in the West who, like, celebrate China, who'll say, well, the Chinese AI is different.
The Chinese AI will work differently.
It will be open.
And already that is forming the question of which AI we use rather than should we use it at all, which is already a very dangerous concept.
They always do it that way.
Which vaccine is the best?
Which AI is the best?
The assumption is we're all going to have to at some point.
But then also, there is a question of what qualifies as AI.
We're talking about the steep curve here, but my computer's been automatically spell-checking everything I've written since I was a teenager.
I mean, that is technically an AI there.
But that has actually affected people's ability to spell.
And if you were an incredibly cruel person at some point in about 2000, you could have told everybody's spell-check to spell words wrong, and there'd be entire generations of people that didn't know.
They would change their right spelling to wrong spelling just because of the red squiggly line.
And that is a very dangerous position to be in.
Yeah, so is there anybody in this group that feels like, Hakeem kind of pointed to it, James touched on it as well, but the idea that there is some ethical, moral way to use AI that defends our side?
So, for example, there's Venice AI that's an open-source, privacy-based...
AI chatbot that says it's not going to censor, etc.
And it's a little different than Grok, I would say.
I'm not vetting it or vouching for it or anything.
But there are people who are trying to do things like that.
I've also seen friends.
One of my friends developed an empathy bot, as he calls it.
It's an AI model that he fed nonviolent communication and all kinds of other, you know, communication tools, and he uses it as, like, a therapy bot for people, which, again, I know some people just feel like any use of it, anything that replaces a human, is scary or is weird.
I'm not necessarily taking a position, but I'm curious.
Does anybody feel like there is a positive use case besides the sort of functional?
methods we talked about here, translating things, transcribing, creating images, or is it that you individually feel like stay away from it as much as possible?
I think what Patrick was saying in the beginning, which again, I hope he does show up, was like, I think his exact words were you're going to get left behind if you're not using AI, right?
And so it's the kind of argument which Elon Musk and others use as well about certain things and say we need to embrace it.
Being left behind is actually beating the system that they're trying to trap us in.
That could be true, too.
What if we don't want to be a part of whatever they're building?
I mean, which I don't think most of us here want, right?
Yeah, so just thoughts on that.
Left behind from the brain chip.
I don't know.
Oh, no!
I want to hear her voice, because you haven't been in yet.
But before...
Sorry, I just want to make a stupid point related to what Kit just said about the spellcheck thing.
Because, actually, I think it's a deep point.
The red squiggly line tells you you've spelt it wrong.
But, for some reason, that's always default set to American spelling.
So it's always trying to tell me that there is no U in the word colour.
I know there is a U in the word colour.
It's biased.
And so, eventually, over decades of this, I start to forget.
Wait, Cancelled has two L's, right?
Right?
It's not one L like those silly Americans.
Or wait, no, maybe it doesn't.
And I start questioning myself.
It's a stupid example, but it is an example of how this can subtly change and manipulate your behavior over long periods of time until you start thinking more like the machine than like a real human being, i.e. a Canadian.
Yeah, I guess I'll chime in.
I think you're both right.
So I agree with James that the end game is the evolution of man.
So this trajectory that we're on.
But on the other hand, it's like I don't feel the line in the sand has been crossed because we're using DARPA internet.
You think of the search engines like Google, ChatGPT is the next phase of the search engine.
And I kind of have a guerrilla mindset that we kind of have to use whatever tools that we have.
So for me, because the other alternative you guys already mentioned is go live in the forest.
Go live in a homestead in an analog sort of world.
I'd like to do that.
I'm trying to, but it's very difficult.
And so I think we just kind of got to make do.
But I think eventually maybe there will come a point where we'll reach that line in the sand as a collective or as individuals.
And it'll be different for different people.
And we'll say, no, I'm not crossing that line.
And for me, the biggest threats when it comes to AI is the dumbing down, which was already mentioned.
We're seeing it all over the place.
I had an experience this week.
I was a guest on a podcast.
Great people.
Bless them.
But we had different time zones.
They're on U.S. Central Time, and Mexico no longer observes daylight saving time.
So I'm on mountain time.
And we were arguing.
I told them twice.
I'm like, it's a one-hour time difference.
And they showed me a snapshot of Brave Search AI, which was wrong.
So Brave was wrong.
And I'm like, well, I mean...
I think I know what time zone I live in, you know?
And so that was just a miniature example.
And my other big fear, you know, I talked about it on my podcast, I had Jobst Landgrebe on.
He's a German scientist, and he wrote a book on why AI, why machines will never rule the world.
And I don't think that there's going to be this sentience that AI achieves, but I think the biggest threat, as you guys have mentioned already and as I talk about a lot, is the social credit system, the algorithm ghetto.
And so it's just going to be this tool that allows them to bring in all this data, sift through it for the digital ID systems, for the digital passport system, for the weapons systems like Lavender, right?
And the spy net, I joke, that's always becoming self-aware.
And so I think that's the big threat that ultimately allows them to sift through the info that's going to be used against us.
Let me say this.
I thought that that was the direction we were going to go in.
I was going to be talking more about, for instance, Sam Altman, who just moved WorldCoin, you talk about social credit scores, the blockchain project, to the United States, and they're launching in five different states.
They've also taken the Loc-Nar-like orb and made it into a cell-phone-like device.
And they've also rebranded from just WorldCoin now to World ID.
So they're really pushing that.
But I want to answer Derek's question.
You know, he asked if there's another positive thing that I think AI can be used for, and I got to say yes.
I think parody is a big thing because you don't have to have great production value.
Now, this video hasn't gone viral, and I don't think it's going to, again, because of the algorithms.
But somebody did a really good deepfake of Donny T. I think when he was announcing the new European trade agreement.
Instead, it's him kicking off the smoking gun of 9-11.
World Trade Center Building 7. And then he talks about the passports and five other big talking points.
And then it's the Alex Jones was right jar at the end.
But I think things like that are positive.
Now, you said music.
I got two quick things on that.
Number one...
My parody song of the year is AI.
It's I Glued My Balls to My Butthole Again.
Fantastic track.
Can't recommend it enough.
Please go check it out after this.
I'm sorry.
It's real.
When we talk about another type of AI-driven music, here's an even more bizarre story if you thought I was going to kick that one out.
So there's this gentleman who was a really experimental instrumentalist, an orchestra guy.
In 1965, in front of a live audience, he hooked electrodes up to his brain and ran a full orchestra with them.
Now, he's been dead for about four years.
You can go watch the video.
I forget what his name is.
I think it's Alexander Lucius or something like that.
Go watch my video on it or I'll email it to you.
Just two weeks ago, they did an art show where they had taken his stem cells before he died.
They had built organoids out of them.
For those that don't know what organoids are, they are artificial brain tissue grown from your genetic material.
They hooked them into some type of electronic device, and he posthumously conducted a new live musical performance from those organoids.
Now, obviously, I don't think that's him.
I think that's transhumanism in the other direction, kind of accepting more and more of the merger between machine and man through our genetics and kind of lessening what we are as humanity from the other side as well.
So I just had to throw that in there, and anyone can take it from here.
Thank you.
Yeah, so I want to bring up...
The argument that initially was forwarded by Hakeem about, you know, we're here on the internet.
So I personally think all of these technologies that have at their inception some sort of national security state component that have become widely adopted have a carrot and a stick phase.
And I think the internet has had a very long carrot phase.
But if you go and look at the origins, really, of the internet in the 90s, it was a lot of framing it as decentralized.
And all of this and getting people on it, getting people to trust it and become dependent on it, which now the world is, especially economically.
And then you, you know, oh, you want access to the internet?
You have to use our digital ID system or our, you know, our programmable surveillable money system, etc.
Like that is the stick that we all kind of know is going to come at some point, right?
Because it's centralized through the ISP.
So you want access to the internet, you have to go through an ISP, and how many meaningfully decentralized, freedom-loving ISPs are there?
Well, not many.
And for places that didn't really have access to an ISP before, they're all being herded onto Starlink, which is basically an extension of the U.S. military, right?
And so I think, you know, in addition to that being the case for AI, I think it was true for things like Bitcoin, which was framed as initially decentralized freedom money.
Did it have that potential just as the Internet had that potential at one point?
Sure.
But now it's being herded into not being a medium of exchange, which is the only use case, in my opinion, that would allow it to be any sort of freedom money.
And it's moving into, you know, strategic reserve asset for Larry Fink and, you know, the oligarchy.
And combining it with stablecoins and all of that makes it an Orwellian system that's no good.
So, at least no good for freedom.
Depends on who you're talking about, I guess.
And so I think AI is ultimately going to go in that same direction.
Are there decentralized AI companies now that are small, little guys?
Sure, but at some point, you know, the big guys are going to take the cake.
Right.
Because AI is only as good as the data it has access to and it trains on.
Who can afford to have access to the big troves of data?
You know, the people who really teamed up with the state that hoovers all of that data up about us consistently.
And I think, a lot of it, I also want to bring back my point that I made much earlier on, about how a lot of these incentives of the algorithm have changed us as independent media creators, maybe, compared to when we started independent media.
For some of that, that was about a decade ago or so.
And for me, and I can only really talk about my case, I have content that I am interested in writing about.
I want to put it out.
I'm not a fast content person.
I put out stories when I have them.
Sometimes, like lately, I've had to do a lot of kid stuff, so I haven't been writing as much.
But I don't personally have to put out a thing every day or every week like a lot of people do.
And so, you know, I'm a little de-linked, I guess, from some of those incentives, but for people that have them, you know, there can be a pressure to talk about certain topics that they feel like everyone is talking about, or to be incentivized to, you know, respond to negative comments or get involved in online drama and all of this.
And I think a lot of the way social media has developed recently and the censorship stuff has happened, there's, like, conversations that need to be had that haven't been had.
Like, I think platform hopping, which has been, like, the independent media response to digital censorship, is eventually a dead end.
You know, because originally it was like, oh, we're going to go from YouTube to Rumble.
Oh, the Rumble that like JD Vance invests in and Peter Thiel invests in and like the whole Palantir crowd has the data of.
So much better.
Right, guys?
So, I mean, eventually, you know, I'm not saying there aren't good platforms, but, you know, we saw it with Gab several years ago.
They pulled it out by the roots, by the DNS.
And ultimately, you know, a lot of that can be done to anything digital.
I'm working on putting all of my stuff in physical form because I really think physical media is the future.
Something people can read and hold and they can't just remotely censor it or cut it off from people.
I would love to...
I just have to go in four minutes.
I lost my train of thought there.
Folks!
I really hope that you enjoyed part one of this panel.
You'll be able to watch part two shortly.
Remember, this guide is not about left or right.
It is always about right and wrong.