Amjad Masad, ex-pro Counter-Strike player and Replit founder, argues AI like his platform democratizes coding—helping non-experts build apps (e.g., Joan Cheney’s profitable creations) while debunking tribalism in science and media. Rogan critiques vaccine mandates tied to Reagan-era liability shields, lab-leak suppression by journals like Nature, and AI-driven misinformation, contrasting Twitter’s 80% bot activity with independent journalism’s potential authenticity. Masad’s squat analyzer app, powered by Google Gemini, showcases mobile programming’s ease, while his jiu-jitsu and powerlifting insights reveal AI’s domain-specific limits versus human adaptability. Their discussion ties tech’s disruption to job displacement fears, with Masad citing China’s long-term manufacturing dominance and Meta’s $15B AI investments—yet questions whether superintelligence is achievable without broader cognitive transfer. [Automatically generated summary]
So like that, I mean, imagine something that, like a pill you could take that would give you a 37% decrease in errors and a 27% faster task completion.
That would be an incredible pill.
Like you would make every surgeon take it.
Did you take your video game pill before you do surgery?
Hey, man, don't operate on my fucking brain unless you take your video game pill.
But Jamie and I were talking about the one thing, and maybe that's kind of showing our age a little bit, but the one thing that's kind of like a little weird slash, I don't know somehow, like a little dystopian is the whole streaming situation where like kids are not like playing the game, they're like watching someone play the game.
Yeah, it's almost like someone is like there's this strange thing with technology where like someone is living life and doing things and you're like sort of it's almost voyeurism or something like that about it.
You know, David Foster Wallace, you know, the guy from Infinite Jest, wrote an essay on TV.
And, you know, he committed suicide before like, you know, the emergence of mobile phones and things like that.
But he was very prescient on the impact of technology on society and especially on America.
And he was also addicted to TV.
And he talked about how it activates some kind of something in us that is something in human nature about voyeurism.
And that's the thing that television and TikTok and things like that activate.
And it's like this negative, addictive kind of behavior that's really bad for society.
I definitely think there's an aspect of voyeurism, but there's just a dull drone of attention draw.
There's a dullness to it that just like sucks you in like slack jawed.
It is watching nonsense over and over and over again that does just enough to captivate your attention, but doesn't excite you, doesn't stimulate you, doesn't necessarily inspire you to do anything. That is the first fly we've ever had in this room.
You know, they can take pills and like kind of, I mean, I'm sure eventually their life falls off the rails, but it's like sort of semi-they're semi-functional when they're on these things.
There's a dude, I watched like a YouTube video, but like he's known for having this contrarian opinion on drugs that you can like control it, like you can, you can do these drugs.
They get hamsterized, get these black eyes where their soul goes away and then they're just off to the races and picking up hookers and doing cocaine and they find themselves in Guatemala.
He snuck in because there's a lot of steps that motherfucker has to go through to get into this room.
I think a lot of people are very health conscious.
That's the rise of cold plunging and sauna use and all these different things like intermittent fasting where people are really paying attention to their body and really paying attention and noticing that if you do follow these steps, it really does make a significant difference in the way you feel.
And maybe more importantly, the way everything operates, not just your body, but your brain.
It's like your function, your cognitive function improves with physical fitness.
And, you know, if you're an ambitious person and you want to do well in life, you want your body to work well, you know, alcohol is not your friend.
It's like, this is such a good, and I feel like cold plunge especially kind of, it's just something, regardless, health benefits or not, something about it, like just mental toughness, like trying to do it every day.
And I think a lot of what discipline is for me is that, again, even keto, and I did carnivore and these diets, like, I'm not sure how much health benefit there is.
I feel like keto is really good on your blood sugar and keeps you kind of on a, you know, even keel kind of throughout the day.
But for me, whenever there's like a lot of chaos in my life, I look at what can I control.
Yeah, not only guides you through it, it codes for you.
So you're sort of, you know, programmers typically think about the idea a little bit, about the logic, but most of the time they're sort of wrangling the syntax and the IT of it all.
And I thought that was always additional complexity that doesn't necessarily have to be there.
And so when I saw GPT for the first time, I thought this could potentially transform programming and make it accessible to more and more people.
Because it really transformed my life.
The reason I'm in America is because I invented a piece of software.
And I thought if you make it available to more people, they can transform their lives.
So my family is originally from Haifa, which is now in Israel, and they were expelled as part of the 1948 Nakba, where Palestinians were sort of kicked out.
Our position, every modern Palestinian that I know, their position is like two-state solution.
We need the emergence of the state of Palestine, you know, and that's the best way to, ending the occupation is the best way to guarantee peace and security even for Israelis.
But yeah, it's just like it's used, it sort of reminds me, you know, in tech, we went through this like quote-unquote woke period where you couldn't talk about certain things as well.
Buying Twitter is the single most impactful thing for free speech, especially on these issues, of being able to talk freely about a lot of subjects that are more sensitive.
One is the targeting of migrant workers, not cartel members, not gang members, not drug dealers, just construction workers showing up in construction sites and raiding them.
What attracted me to this country, from the moment that I was aware and we started consuming American media and American culture, is freedom, is the concept of freedom, which I think is real.
I was watching this psychology student from, I think he's from Columbia, but he has a page on Instagram.
I wish I could remember his name because he's very good.
He's a young guy.
But he had a very important point, and it was essentially that fascism rises as the over-correction response to communism.
And that we essentially had this Marxist communism rise in first universities, and then it made its way into business because these people left the university and then found their way into corporate America.
And then they were essentially instituting those.
And then the blowback to that, the pushback, is this fascism.
I'm not an expert in it, but the idea is that communism, fascism, and even the form of capitalism we're sort of living under right now share this thing called managerialism. Capitalism used to be this idea that the owner-founders of those companies, of capitalist companies, were running them.
And it was like true capitalism of sorts.
But both communism and fascism share this property of centralized control and like a class of people that are sort of managerial.
And maybe those are the elite sort of Ivy, Ivy League students that are trained to be managers and they grow up in the system, kind of bred to become like managers of these companies.
And today's America is like trending that way where it is like a managerial society.
In Silicon Valley, there's like a reaction to that right now.
People call it founder mode, where a lot of founders felt like they were losing control of their companies because they're hiring all these managers.
And these managers are running the companies like you would run Citibank.
And then a lot of founders were like, no, we need to run those companies like we built them.
And Elon is obviously at the forefront of that.
I once visited XAI when they were just starting out, Elon's AI company.
I mean, my opinion of talented people, people like Elon, things like that, is that we should be in the free market.
I think there's little you can change through government.
As best we can sort of expect of our government to get out of the way of innovation, let people, let founders, entrepreneurs innovate and make the market more dynamic.
But again, going back to this idea of managerialism, if you look at the history of America, one really striking stat is that new firm creation, new startups in the United States, has been trending down for a long time.
Although there's all this stock of startups in Silicon Valley and all of that, but in reality, there's less entrepreneurship than there used to be.
And instead, we have this system of conglomerates and really big companies and monopsony, which is the idea that the banks or BlackRock own all these companies, including competitors.
And they implicitly collude because they have the same owners.
And all of that is sort of anti-competitive.
So the market has gotten less dynamic over time.
And this is also part of the reason I'm excited about our mission at Replit to make it so that anyone can build a business.
Actually, on the way here, your driver, Jason, is a fireman.
And so I was telling him about our business.
And he does training for other firemen around the country.
He flies around.
And he does it out of pocket and just for the love of the game.
And he was like, yeah, I've had this idea for a website so I can scale my teaching.
I can make it known where am I going to be giving a course, put the material online.
And we were brainstorming, potentially this could be a business.
And I feel like everyone, like not everyone, but a lot of people have business ideas, but they are constrained by their ability to make them.
And then you go, you try to find a software agency and they quote you sort of a ton of money.
Like we have a lot of stories.
There's this guy.
His name is Joan Cheney.
He's a user of our platform.
He's a serial entrepreneur, but whenever he wanted to try ideas, he would spend hundreds of thousands of dollars to kind of spin up an idea off the ground.
And now he uses Replit to try those ideas really quickly.
And he recently made an app in a number of weeks, like three, four, five weeks, that made him $180,000.
So it's on its way to generating millions of dollars.
And because he was able to build a lot of businesses and try them really quickly.
Without the big investment, without other people, which at some point you need more collaborators, but early on in the brainstorming and in the prototyping phase, you want to test a lot of ideas.
And so it's sort of like 3D printing, right?
Like 3D printing, although people don't think it had a lot of impact on industry, it's actually very useful for prototyping.
I remember talking to Jack Dorsey about this, and early on in Square, they had this Square device, and it was amazing.
You would plug it into the headphone jack to accept payments.
Do you remember that?
And so a lot of what they did to kind of develop the form factor was using 3D printing because it's a lot faster to kind of iterate and prototype and test with users.
And so, with software over time, you know, I explained how when I was growing up, it was kind of easier to get into software.
Because you boot up the computer and you get the MS-DOS, you get the, it immediately invites you to program in it.
Whereas today, you, you know, buy an iPhone or a tablet, and it is like a purely consumer device.
It has like all these amazing colors and does all these amazing things, and kids get used to it very quickly, but it doesn't invite you to program it.
And therefore, we kind of lost that sort of hacker ethos.
There's less programmers, less people who are making things because they got into it organically.
It's more like they go to school to study computer science because someone told them you have to study computer science.
And I think making software needs to be more like a trade.
Like, you don't really have to go to school and spend four or five years and hundreds of thousands of dollars to learn how to make it.
You know, it's like, you know, you and I are into cars, right?
Like, I don't really have to tune up my car anymore, but it's useful to know more about cars.
It's fun to know about cars.
You know, if something happens, if I go to the mechanic and he's doing work on my car, I know he's not going to scam me because I can understand what he's doing.
Knowledge is always useful.
And so I think people should learn as much as they can.
And I think the difference, though, Joe, is that when I was coming up in programming, you learned by doing.
Whereas it became this sort of like very sort of traditional type of learning where it's like a textbook learning.
Whereas I think now we're back with AI.
We're back to an era of learning by doing.
Like when you go to our app, you see just text prompts, but a couple clicks away, you'll see the code.
You'll be able to read it.
You'll be able to ask the machine what it did there.
We just announced a partnership with the government of Saudi Arabia where they want their entire population essentially to learn how to make software using AI.
So they set up this new company called HUMAIN, and HUMAIN is this end-to-end value chain company for AI, all the way from chips to software.
And they're partnering with a lot of American companies as part of the coalition that went to Saudi a few months ago with President Trump to do the deals with the Gulf region.
And so they're doing deals with AMD, NVIDIA, a lot of other companies.
And so we're one of the companies that partnered with HUMAIN.
And so we want to bring AI coding to literally every student, every government employee.
Because the thing about it is it's not just entrepreneurs that's going to get something from it.
I mean, whatever creativity is, whatever allows you to make poetry or jazz or literature, like whatever, whatever allows you to imagine something and then put it together and edit it and figure out how it resonates correctly with both you and whoever you're trying to distribute it to.
I mean, we don't really have a theory of consciousness.
And I think it's like sort of hubris to think that consciousness just emerges.
And it's plausible.
Like I'm not totally against this idea that you built a sufficiently intelligent thing and suddenly it is conscious.
But it's like a religious belief that a lot of Silicon Valley has, that consciousness is just a side effect of intelligence, or that consciousness is not needed for intelligence.
Somehow it's like this superfluous thing.
And they try not to think or talk about consciousness because actually consciousness is hard.
And see if you can find that so we could define it so I don't butcher it.
But there's people that believe that consciousness itself is something that everything has and that we are just tuning into it.
Morphic resonance, a theory proposed by Rupert Sheldrake, suggests that all natural systems, from crystals to humans, inherit a collective memory of past instances of similar systems.
This memory influences their form and behavior, making nature more habitual than governed by fixed laws.
Essentially, past patterns and behaviors of organisms influence present ones through connections across time and space.
But there are philosophers that have sort of a similar idea of this sort of universal consciousness and humans are getting a slice of that consciousness.
Every one of us is tapping into some sort of universal consciousness.
By the way, I think there are some psychedelic people that think the same thing, that when you take psychedelic, you're just peering into that universal consciousness.
I mean, the experience is so baffling that people come back and the human language really lacks any phrases, any words that sufficiently describe the experience.
So you're left with this very stale, flat, one-dimensional way of describing something that is incredibly complex.
So it always feels, even the descriptions, even like the great ones like Terrence McKenna and Alan Watts, like they're descriptions that fall very short of the actual experience.
Nothing about it makes you go, yes, that's it.
He nailed it.
It's always like, kind of, yeah, kind of, that's it.
It's a real problem, I think, with our world, the Western world, is that we have thrown this blanket phrase.
You know, we talk about language being insufficient.
The word drugs is a terrible word to describe everything that affects your consciousness or affects your body or affects performance.
You have performance-enhancing drugs, like steroids, and then you have amphetamines, and then you have opiates, and you have highly addictive things, and then coffee.
I mean, you could certainly get psychologically addicted to experiences.
I think there's also a real problem with people who use them and think that somehow or another they're just from using them gaining some sort of advantage over normal society.
Yeah, I felt that with a lot of people who get into sort of more Eastern philosophy is that there's this thing about them where it feels like there's this air of arrogance.
Well, mostly it's metabolic health, you know, other than like extreme biological variabilities, vulnerabilities that certain people have to different things, you know, obviously.
You know, I had this like weird thing happen where I started like feeling fatigued like a couple few years ago and I would like sleep hours and the more I sleep, the more tired I get in the morning.
Weight loss, you know, blood sugar in the morning, cholesterol, which, I don't know, some people don't believe in, but, you know, all my numbers got better.
Vitamin D, everything got better, but and I could feel.
The only reason why? They wanted to make an enormous amount of money.
And the only way to do that is to essentially scare everyone into getting vaccinated, force, coerce, do whatever you can, mandate it at businesses, whatever you can, mandate it for travel, do whatever you can, shame people.
And the speech thing is interesting because when something happens, there's this, I don't know, you can call them useful idiots or whatever, but there's this suppression that immediately happens.
Then there are voices like yours and others that create this pushback that, and you took a big hit, it probably was very stressful for you, but you could see there's this pushback and then it starts opening up and maybe people can talk about it a little bit and then slowly opens up and now there's a discussion.
And so I think I said something right now about America is challenging, but also the flip side of that is there's this correction mechanism.
And again, with the opening up of platforms like Twitter and other, by the way, a lot of others copied it.
You had Zuck here.
I worked at Facebook.
I know that was very, let's say, I think he always held free speech in high regard, but there was a lot of people in the company that didn't.
Yes, I would agree with that.
And there was suppression.
But then now it's the other way around, I would say with the exception of the question of Palestine and Gaza.
They're sincere and they're looking at what's happening in Gaza and they're seeing images and they're saying, this is not what we should be as America.
If I'm trying to be as charitable as possible: like the Israelis specifically, maybe from October 7, what they saw there, their heart is hardened.
And I think a lot of people, especially on the Republican side, they're unable to see the Palestinians as humans, especially as people with emotions and feelings and all of that.
You have people that are in control of large groups of people that convince these people that these other large groups of people that they don't even know are their enemies.
And those large groups of people are also being convinced by their leaders that those other groups of people are their enemies.
And then rockets get launched.
And it's fucking insane.
And the fact that it's still going on in 2025 with all we know about corruption and the theft of resources and power and influence, it's crazy that this is still happening.
I'm really hoping the internet is finally reaching its potential to start to open people's minds and remove this veil of propaganda and ignorance because it was starting to happen in 2010, 2011.
And I think with good intention initially, I think the people that were censoring thought they were doing the right thing.
They thought they were silencing hate and misinformation.
And then the craziest term, malinformation.
Malinformation is the one that drives me the most nuts because it's actual factual truth that might be detrimental to overall public good.
She's like, what does that mean?
Are people infants?
Are they unable to decide whether this factual information, how to use that and how to have a more nuanced view of the world with this factual information that's inconvenient to the people that are in power?
That's crazy.
It's crazy.
You're turning adults into infants and you're turning the state into God.
They're fucking flawed human beings and they shouldn't have that much power.
Because no one should have that much power.
And this is, I think, something that was one of the most beautiful things about Elon purchasing Twitter is that it opened up discussion.
Yeah, you've got a lot of hate speech.
You've got a lot of legitimate Nazis and crazy people that are on there too that weren't on there before.
But also you have a lot of people that are recognizing actual true facts that are very inconvenient to the narrative that's displayed on mainstream media.
And because of that, mainstream media has lost an insane amount of viewers.
And their relevancy, like the trust that people have in mainstream media is at an all-time low, as it should be.
Because you can watch, and I'm not even saying right or left, watch any of them on any very important topic of world events.
It's old people that don't use the internet or don't really truly understand the internet and really don't believe in conspiracies.
Like fucking Stephen King the other day, who I love dearly.
I am a giant Stephen King fan, especially when he was doing cocaine.
I think he's the greatest writer of all time for horror fiction.
But he tweeted the other day, I'm sorry to like see if you could find it.
Something about Twitter?
I think he went to Blue Sky.
He bailed on Blue Sky.
They all bail on Blue Sky.
Everyone bails on Blue Sky. He tweeted that there is no deep state.
Fucking, what was the total thing of it?
Something about the deep state.
But it was such a goofy tweet.
It's like, this is like boomer logic personified in a tweet by a guy who really, someone needs to take his phone away because it's fucking ruining his old books for me.
It's not.
I recognize he's a different human now when he's really, really old and he got hit by a van and you're all fucked up.
But this, can you find it?
Because it really, it was like yesterday or the day before yesterday.
I just remember looking at it and go, this is why I'm off social media.
I was trying to stay off social media, but somebody sent it to me.
And I was like, Jesus fucking Christ, Stephen King.
Did you find it?
Here it is.
I hate to be the bearer of bad news, but there's no Santa Claus, no tooth fairy.
Also, no deep state, and vaccines aren't harmful.
These are stories for small children and those too credulous to disbelieve them.
That is boomerism.
That is boomerism.
And meanwhile, Grok counters it right away.
Look at this.
So someone says, Grok, which vaccines throughout history were pulled from the market because they were found to be harmful, and why?
And Grok says, several vaccines have been withdrawn due to safety concerns, though such cases are rare.
Rotavirus vaccine.
Well, there's a lot more because this is all this shit.
But, you know, the real interest, there's a financial interest in vaccines.
There's a financial interest that doctors have in prescribing them.
And doctors have, they're financially incentivized to vaccinate all of their patients.
And that's a problem.
That's a problem because they want that money.
And so, you know, what is her name, Mary Talley, is it Bowden?
She's hyphenated.
She was talking about on Twitter that if she had vaccinated all of her patients in her very small practice, she would have made an additional $1.5 million.
Obviously, she's got tremendous courage and, you know, and she was, you know, she went through hell dealing with the universities and newspapers and media calling her some sort of quack and crazy person.
But what she's saying is absolutely 100% true.
There's financial incentives that are put in place for you to ignore vaccine injuries and to vaccinate as many people as possible.
That's the big problem, is they don't have any liability for the vaccines, because during the Reagan administration, when they were... I didn't kill the fly? This motherfucker.
I thought I whacked him.
There he is.
He's taunting me.
But during the Reagan administration, they made it so that vaccine makers are not financially liable for any side effects.
And then what do you know?
They fucking ramped up the vaccine schedule tenfold after that.
It's just money, man.
Money is a real problem with people, because when people live for the almighty dollar and they live for those zeros on a ledger, and that's their goal, their main goal.
Yeah, one of the best examples is the fake studies that the sugar industry funded during the 1960s that showed that saturated fat was the cause of all these heart issues, and not sugar.
That was like $50,000.
They bribed these scientists.
They gave them $50,000 and it ruined decades of people's health.
Who knows how many fucking people thought margarine was good for you because of them?
The papers that were pulled that were completely fraudulent.
Like decades of Alzheimer's research was just all horseshit.
Steve, you can find that.
Because I can't remember it offhand, but this is a giant problem.
It's money.
It's money and status and that these guys want to be recognized as being the experts in this field.
And then they get leaned on by these corporations that are financially incentivizing them.
And then it just gets really fucking disturbing.
It's really scary because you're playing with people's health.
You're playing with people's lives.
And you're giving people information that you know to be bad.
Allegations of fabricated research undermine key Alzheimer's theory.
Six-month investigation by Science Magazine uncovered evidence that images in the much-cited study published 16 years ago in the journal Nature may have been doctored.
They are doctored, yeah.
Huberman actually told me about this, too.
You know, this is disturbing fucking shit, man.
It uncovered evidence that images in the much-cited study published 16 years ago may have been doctored.
These findings have thrown skepticism on the work of, I don't know how to say his name, Sylvain Lesné, a neuroscientist and associate professor at the University of Minnesota, whose research fueled interest in a specific assembly of proteins as a promising target for the treatment of Alzheimer's.
He didn't respond to NBC News' request for comment, nor did he provide comment to Science Magazine.
The investigation identified more than 70 instances of possible image tampering in his studies.
Whistleblower Dr. Matthew Schrag, a neuroscientist at Vanderbilt University, raised concerns last year about the possible manipulation of images in multiple papers.
Karl Herrup, a professor of neurobiology at the University of Pittsburgh Brain Institute, who wasn't involved in the investigation, said the findings are really bad for science.
It's never shameful to be wrong in science, said Herrup, I hope I'm saying his name right, who also worked at the school's Alzheimer's Disease Research Center.
A lot of the best science is done by people being wrong and proving first if they were wrong and then why they were wrong.
What is completely toxic to science is to be fraudulent, of course.
Yeah, there's just whenever you get people that are experts and they cannot be questioned, and then they have control over research money and they have control over their department.
You know, a lot of it is being the gatekeepers for information and for truth.
And then you're influenced by money.
You know, to this day, I was watching this discussion.
They were talking about the evolution of the concept of the lab leak theory.
And that it's essentially universally accepted now, even in mainstream science, that a lab leak is most likely how COVID was released, except in these journals.
These fucking journals like Nature, they're still pushing back against that.
They're still pushing this natural spillover, which is fucking horseshit.
Sometimes I think about if there's like, you know, some kind of technology solution, or not solution, but like we can get technology built to help better aid at truth finding.
A simple example of that is the way Twitter community notes work.
It's like, you know, they find the users that are maximally divergent in their opinions.
And if they agree on some note as true, then that is a high signal that is potentially true.
So if you and I disagree in everything, but we agree that this is blue, then it's more likely to be blue.
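The divergence-then-agreement signal Masad describes can be sketched in a few lines of Python. This is a toy illustration only, not X's actual Community Notes algorithm (which uses matrix factorization over a rating matrix to find "bridging" notes); all the names and data here are made up:

```python
# Toy sketch of the Community Notes intuition: a note is a stronger signal
# when users who usually DISAGREE with each other both rate it helpful.
from itertools import combinations

def agreement(u, v, history):
    """Fraction of past items the two users rated the same way."""
    shared = [item for item in history[u] if item in history[v]]
    if not shared:
        return 0.5  # no overlap: assume neutral
    same = sum(history[u][item] == history[v][item] for item in shared)
    return same / len(shared)

def note_score(helpful_raters, history):
    """Average pairwise divergence among users who rated the note helpful.

    High score: normally opposed users agree this note is helpful, which
    is stronger evidence than agreement within one like-minded group.
    """
    pairs = list(combinations(helpful_raters, 2))
    if not pairs:
        return 0.0
    return sum(1 - agreement(u, v, history) for u, v in pairs) / len(pairs)

# Hypothetical rating histories: "a" and "b" disagree on everything,
# "a" and "c" agree on everything.
history = {
    "a": {"post1": 1, "post2": 1, "post3": 0},
    "b": {"post1": 0, "post2": 0, "post3": 1},
    "c": {"post1": 1, "post2": 1, "post3": 0},
}
print(note_score(["a", "b"], history))  # 1.0: opposed users agree, strong signal
print(note_score(["a", "c"], history))  # 0.0: like-minded pair, weak signal
```

So in the blue example from the conversation: if two users with opposite histories both mark "this is blue" as helpful, the sketch scores that note highly.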
So, you know, I wonder if, you know, there's a way to kind of simulate maybe debate using AI.
You know, I'm not sure if you used Deep Research.
Deep Research is this new trend in AI where ChatGPT has it, Claude has it, Perplexity, they all have it, where you put in a query and the AI will go work for 20 minutes.
And it'll send you a notification.
It'll just say, hey, I looked at all these things, all these reports, all these scientific studies, and here's everything that I found.
And early on in ChatGPT, I think there was a lot of censorship, because it kind of was built in the Great Woke era.
But I think it has since improved, and I'm finding Deep Research is able to look at more controversial subjects and be a little more truthful about them. You know, if it finds real trustworthy sources, it will tell you, yeah, this is not a mainstream thing, this is perhaps considered a conspiracy theory, but I'm finding that there's evidence for this theory.
So that's one way to do it.
But another way I was thinking about it is to simulate like a debate, like a Socratic debate between AIs, like have like a society of AIs, like a community of AIs with different biases, different things.
And then they bring it up at the same time and both of them sort of go over the network to kind of explore or whatever.
And then they start linking up and they start kind of talking.
And then they invent a language and they start talking in that language and then they merge and it becomes like a sort of a universal AGI and it tries to enslave humanity and that's like the plot of the movie.
I mean, everyone, all these alien movies, it's so fascinating to try to imagine what they would communicate like, how they would be, what we would experience if we did encounter some sort of incredibly sophisticated alien experience, alien intelligence.
I feel like the world is often surprising in ways that we don't expect.
I mean, obviously that's the definition of surprising.
But like, you know, the mid-century sci-fi authors and people who were thinking about the future, they didn't anticipate how interconnected we were going to be.
They were just focused more on the physical reality of being able to go to space and flying cars and things like that.
But they really didn't anticipate the impact of how profound the impact of computers are going to be on humans, on society, how we talk and how we work and how we interact with other people, both good and bad.
And I feel like the same thing with AI.
Like I think a lot of the predictions that are happening today, like the CEO of Anthropic, a company that I really like, who said that we're going to have 20% unemployment in the next few years.
Well, that's the fear, I mean, this is the thing, the psychological aspect of universal basic income.
You know, I look at universal basic income.
Well, first of all, my view on social safety nets is that if you want to have a compassionate society, you have to be able to take care of people that are unfortunate.
And everybody doesn't have the same lot in life.
You're not dealt the same hand of cards.
Some people are very unfortunate.
And financial assistance to those people is imperative.
It's one of the most important things about a society.
You don't have people starving to death.
You don't have people so poor they can't afford housing.
That's crazy.
That's crazy with the amount of money we spend on other things.
Like, you know, I don't want to, I don't know how Austin is right now, but I was thinking of moving here during the pandemic, and I was like, well, this is San Francisco.
Well, not only that, you get, because you're a successful person, you get pointed at like you're the problem.
You need to pay your fair share.
But what they don't, this is my problem with progressives.
They say that all the time.
These billionaires need to pay their fair share.
Absolutely.
We all need to pay our fair share.
But to who?
And shouldn't there be some accountability to how that money gets spent?
And when you're just willing to pay, turn a complete blind eye, and not look at all at corruption, and completely dismiss all the stuff that Mike Benz has talked about with USAID, all the stuff that Elon and Doge uncovered.
Everyone wants to pretend that that's not real.
Look, we've got to be centrists.
We've got to stop looking at this thing so ideologically.
When you see something that's totally wrong, you've got to be able to call it out, even if it's for the bad of whatever fucking team that you claim to be on.
You know, here everything is transparent, like, you know, court cases and everything, right?
Like more than any other place in the world.
And so why shouldn't government spending be transparent?
And we have the technology for it.
I think one of the best things that Doge could have done, and maybe still could do, is have some kind of ledger for all the spend in government, at least the non-sensitive sort of spend.
Well, people don't want to see it, unfortunately, because they don't want Elon to be correct because Elon has become this very polarizing political figure because of his connection to Donald Trump and because a lot of people, I mean, there's a lot of crazy conspiracies that Elon rigged the 2024 elections.
It's like, you know, everyone gets nuts.
And then there's also the discourse on social media, which half of it is, at least half of it is fake.
But I think it's probably the beginning of the end of social media as we know it today.
Like, I don't see it getting better.
I think it's going to get worse.
I think, you know, historically, state actors were the only entities that were able to flood social media with bots that can be somewhat believable to change opinions.
But I think now a hacker kid in his parents' basement will be able, with $100, to spin up hundreds, perhaps thousands of bots.
One way I've found to try to predict where the future is headed is like look at trends today and try to extrapolate.
You know, that's the easiest way.
So if group chats are the thing, you could imagine a collaborative curation of social media feeds through group chats.
So your group chat has an AI that gets trained on the preferences and what you guys talk about.
And maybe it like picks the kind of topics and curates the feed for you.
So it's an algorithmic feed that evolved based on the preferences of people in the group chat.
And maybe there's a way to also prompt it, using prompts to kind of steer it and make it more useful for you.
But I think group chats are going to be like the main interface for how people sort of consume media and it's going to get filtered through that, whether good or bad.
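As a toy sketch of that idea, imagine the group-chat "AI" reduced to a simple word-frequency profile that re-ranks candidate posts. The function names and the scoring here are made up for illustration; a real system would use an actual trained model rather than word counts:

```python
from collections import Counter

def chat_topic_profile(messages):
    """Build a crude interest profile from group-chat messages: just word
    counts, standing in for whatever a real model would learn."""
    words = Counter()
    for msg in messages:
        words.update(w.lower().strip(".,!?") for w in msg.split() if len(w) > 3)
    return words

def curate_feed(posts, profile, top_k=3):
    """Rank candidate posts by how much they overlap with the group's profile."""
    def score(post):
        return sum(profile.get(w.lower().strip(".,!?"), 0) for w in post.split())
    return sorted(posts, key=score, reverse=True)[:top_k]

# The chat's interests skew toward jiu-jitsu, so the feed follows.
profile = chat_topic_profile([
    "Did you see that guard pass video?",
    "That guard retention detail was slick",
])
feed = curate_feed([
    "New guard pass breakdown video",
    "Stock market update",
    "Celebrity gossip roundup",
], profile, top_k=2)
```

The point of the sketch is the shape of the loop, the chat produces the preference signal, and the feed is re-ranked against it, not the scoring itself.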
Because I think Twitter still has a place for debate.
I think it's very, very important for public debate between public figures.
Yeah, I think there's, you know, some of this investigative journalism that is not real-time, and there are some reporters that are still good at it, but a lot of them moved to Substack as well.
But then I think there's probably a naivete that we all have about past journalism that we think wasn't influenced and was real.
I think there's probably always been horseshit in journalism.
You know, all the way back to Watergate.
You know, when Tucker Carlson enlightened me on the true history of Watergate, that Bob Woodward was an intelligence agent and the first assignment he ever got as a reporter was Watergate.
Yeah, and I think you can look at it at a societal level, which, again, is why I'm interested in this idea of AI making more people entrepreneurs and more independent: at a macro level, you'll get more authenticity.
Well, you know, there's obviously going to be a lot of things that are— There's going to be jobs that are going to go away.
And there's going to be spam and bots and fraud and all of that.
There's going to be problems with autonomous weapons and all of that.
And I think those are all important and we need to handle them.
But also, I think the negative angle of technology and AI gets a lot more views and clicks.
And if we want to go viral right now, I'll tell you, these are the 10 jobs that you're going to lose tomorrow.
And that's the easiest way to go viral on the internet.
But trying to think through what are the actual implications in what is true about human nature that really doesn't change and really is timeless.
And I think people want to create and people want to make things and people have ideas.
Again, everyone that I talk to has one idea or another, whether it's for their job or for a business they want to build or somewhere in the middle.
Just yesterday I was watching a video of an entrepreneur using a platform, Replit.
His name is Ahmad George, and he works for this skincare company.
And he's an operations manager.
And a big part of his job is like managing inventory and doing all of this stuff like in a very manual way and very tedious way.
And he always had this idea of like, let's automate a big part of it.
It's, you know, a known problem, ERP.
So they went to their software provider, NetSuite, and told them we need these modifications to the ERP system so that it makes our job easier.
We think we can automate, you know, hundreds of hours a month or something like that.
And they quoted them $150,000.
And he had just seen a video about our platform.
And he went on Replit and built something in a couple of weeks, it cost him $400, and then he deployed it in his office.
Everyone at the office started using it.
They all got more productive.
They started saving time and money.
He went to the CEO and showed him the impact.
Look at how much money we're saving.
Look at the fact that we built this piece of software that is cheaper than what the consultants quoted us.
And I want to sell the software to the company.
And so he sold it for $32,000 to the company.
And next year, he's going to be getting more maintenance subscription revenue from it.
So this idea of people becoming entrepreneurs, it doesn't mean like everyone has to quit their job and build a business.
But within your job, everyone has an opportunity to get promoted.
Everyone has an opportunity to remove the tedious job.
There was a Stanford study just recently asking people, what percentage of your job is automatable?
And people said about half.
50% of what I do is routine and tedious.
And I don't want to do it.
And rather, I have ideas on how to make the business better, how to make my job better.
And I think we can use AI to do it.
There's hunger in the workforce to use AI for people to reclaim their seat as the creative driver.
Because the thing that happened with the emergence of computers is that in many ways people became a little more drone-like and NPC-like.
They're doing the same thing every day.
But I think the real promise of AI and technology has always been automation so that we have more time either for leisure or for creativity or for ways in which we can advance our lives, change our lives or our careers.
And yeah, this is what gets me excited.
And I think it's, I don't think it's predominantly a rose-colored glasses thing, because I'm seeing it every day.
Look, we have a bunch of things that are happening simultaneously.
And I think one of the big fears about automation and AI in general is the abruptness of the change.
Because it's going to happen, boom, jobs are going to be gone.
And then, well, these tedious jobs, do we really want people to be reduced to these tedious existences of just filing paperwork and putting things on shelves?
But if you're someone whose job is sort of a desk job, you already are on the computer, there's a lot of opportunity for you to reskill and start using AI to automate a big part of your job.
And yes, there's going to be job loss, but I think a lot of those people will be able to reskill.
And what we're doing with the government of Saudi Arabia, I would love to do in the U.S. So how is the government of Saudi Arabia using it?
One is an entire generation of people growing up with these creative tools.
Instead of just textbook learning, instead learning by doing, making things.
So an entire generation understanding how to make things with AI, how to code, and all of that stuff.
Second is upgrading sort of government operations.
So you could think of it sort of like Doge, but more technological.
Can we automate a big part of what we do in HR, finance, and things like that?
And I think it's possible to build these specific AI agents that do part of a finance job or an accounting job.
Again, all these routine things that people are doing, you can go and automate that and make government as a whole more efficient.
And third is entrepreneurship.
If you gave that power to more people to be able to build businesses, then not only are they growing up with it, but there's also a culture of entrepreneurship.
And that exists already in Saudi Arabia.
I mean, the sad thing about the Middle East, there's so much potential, but there's so much wars and so much disaster.
Like, I think what President Trump did with the deals in the Gulf region is great.
It's going to be great for the United States.
It's going to be great for the Gulf region.
But I think we need more of that, you know, we talked about a government.
We need more of that enlightened view of education, of change in our government today.
You know, this idea that we're going to bring back the old manufacturing jobs, I understand Americans got really screwed with what happened.
Like, you know, these jobs got sent away by globalism, whatever you want to call it.
And a small number of people got massively rich.
A lot of people got disenfranchised.
And we had the opiate epidemic.
And it just did massive damage to the culture.
But is the way to bring back those jobs?
Or is there a new way of the future?
And there's probably a new manufacturing wave that's going to happen with robotics.
You know, the humanoid robots are starting to work.
And these, I think, will need a new way of manufacturing.
And so the U.S. can be at the forefront of that, can own that, bring new jobs into existence.
And all of these things need software.
Like our world is going to be primarily run by AI and robots and all of that.
And more and more people need to be able to make software.
Even if it is prompting and not really coding, you know, a lot more people just need to be able to make it.
There's going to be a need for more products and services and all of that stuff.
And I think there's enough jobs to go around if we have this mindset of let's actually think about the future of the economy as opposed to let's bring back certain manufacturing jobs, which I don't think Americans would want to do anyways.
My problem is there's some people that are doing those jobs right now and it's their entire identity.
You know, they have a good job, they work for a good company, they make a good living, and that might go away, and they're just not psychologically equipped to completely change their life.
Well, desperation, unfortunately, is going to motivate people to make changes.
And it's going to also motivate some people to choose drugs.
That's my fear.
My fear is that you're going to get a lot more people.
There's going to be a lot of people that they figure it out and they survive.
I mean, this is natural selection, unfortunately.
Like, natural selection applied to a digital world.
There's going to be people that just aren't psychologically equipped to recalibrate their life.
And that's my real fear.
My real fear is that there's a bunch of really good people out there that are, you know, valuable parts of a certain business right now that their identity is attached to being employee of the month.
They're good people.
They show up every day.
Everybody loves them and trusts them.
They do good work and everybody rewards them for that.
So then blue-collar, which is what, like, go back 10 years ago and we thought, okay, self-driving cars, you know, robots and manufacturing.
And that turned out to be a lot harder than actually like more desk jobs because we have a lot more data.
For one, we have a lot more data on people sitting in front of a computer and doing Excel and writing things on the internet.
And so we're able to train these what we call large language models.
And those are really good at using a computer like a human uses a computer.
And so I think the jobs to be worried about, especially in the next months to a year, a little more, is the routine computer jobs where it's formulaic.
You go, you have a task, like quality assurance jobs, right?
Software quality assurance.
You have to constantly test the same feature of some large software company, Microsoft or whatever.
You're sitting there and you're performing the same thing again and again and again every day.
And if there's a bug, you kind of report it back to the software engineers.
And that is, I think, really in the bullseye of what AI is going to be able to do over the next months.
I get cameras here, LIDAR there for self-driving, and this has two NIO-made chips.
And for reference, one of those chips is as powerful as four NVIDIA chips.
And this has two.
NIO also has battery swap stations, so if you're in a rush, you can hit one up.
It'll lift your car, swap out your battery, put in a fully charged one in between three and five minutes.
But here's where the S-Class should be worried.
Not only does this have rear steer and steer-by-wire, so it's extremely easy to maneuver, it may have one of the most advanced hydraulic systems I've ever seen.
It can pretty much counteract any bump.
After you go over something four times, it'll memorize it so that the fifth time, it's like that bump never existed.
Inside, you get pillows in your headrest, heated, ventilated, and massaging leather seats, a passenger screen built into my dash, a main screen that works super fast.
I get a driving display, a head-up display, and my steering works super fast.
What's interesting about the car is learning the terrain.
If it went over it once, it'll learn it.
And I think this is the next sort of big thing with AI, whether it's robotics, cars, or even ChatGPT now, it has memory.
It learns about you and starts to, sort of similar to how social media feeds, though I think those are in a lot of ways more negative, learn about you.
I think these systems will start to have more online learning.
Instead of just training them in these large data centers on these large datasets and then giving you this thing that doesn't know anything about you, totally stateless.
As you use these devices, they will learn your pattern, your behavior, and all that.
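That pattern, remembering what the system has encountered at a given spot and pre-compensating the next time, can be illustrated with a toy sketch. This is purely hypothetical, not how any actual suspension or model memory is implemented; the class and method names are made up:

```python
class TerrainMemory:
    """Hypothetical sketch of 'learning the terrain': remember bump severity
    by road position, so the system can pre-compensate on later passes."""

    def __init__(self):
        self.bumps = {}  # position -> observed severity

    def record(self, position, severity):
        """Store (or update) what was felt at this position."""
        self.bumps[position] = severity

    def adjustment(self, position):
        """Counteracting correction for a known bump; 0.0 if never seen."""
        return -self.bumps.get(position, 0.0)

memory = TerrainMemory()
memory.record(12.5, 3.0)              # first pass: a bump at position 12.5
correction = memory.adjustment(12.5)  # later pass: pre-soften at that spot
```

The key design point is statefulness: unlike a stateless model that treats every pass as the first, the memory accumulates across uses.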
I'm not an expert on China, but a lot of people think that the thing that makes China better at manufacturing is the sort of, quote unquote, treating workers like slaves.
So slave work or whatever, which I'm sure some of that happens.
But Tim Cook recently said, maybe not so recent, but he thinks, you know, part of the reason why they manufacture in China is there's expertise there that developed over time.
Yeah, and I think one of the things that are good about more technocratic systems, Singapore, and obviously China is the biggest one, is that the leadership, it comes at a cost of freedom and other things, but the leadership can have a 50-year view of where things are headed.
And they can say, while yes, we're now making the plastic crap, we don't want to keep making plastic crap.
We're going to build the capabilities and the automation and manufacturing expertise to be able to leapfrog the West in making these certain things.
Whereas it's been historically hard, again, for good reasons.
I think it's more freedom-preserving when you don't have that much power in government.
But I feel like in America we have the worst of both worlds, where increasingly the government is making more and more decisions and choices than ever.
But at the same time, we don't have this enlightened, like, you know, 10-year roadmap for where we want to be.
Like when I was there at Facebook, he was talking about the idea that there's going to be a fundamental shift.
He's like, if you look back 100 years, computers every 20 years or whatever change the user interface modality.
You go from terminals and mainframes to desktop computers to mobile computing.
And he was like, okay, what's next?
And first guess was like VR.
And now I think their best guess is like AR plus AI.
The AR glasses, their new Meta Ray-Ban glasses, plus AI.
And they can make massive investments.
They just made a crazy investment.
This company, Scale AI.
Scale AI is a data provider for OpenAI and Google.
And what they do is OpenAI will say, I want the best law and legal data to train the best legal machine learning model.
And they'll go to places where the labor costs are low, but people are maybe still well educated.
There are places in Africa and Asia that are like that.
And they'll sit them down and say, okay, you're going to get these tasks, these legal programming, whatever tasks, and you're going to do them and you're going to write your thoughts as you're doing them.
I'm simplifying it, but basically that they collect all this data.
Basically, it's labeled labor.
They take it, they put it in the models, and they train the models.
And OpenAI spends billions of dollars on that, Anthropic, all these companies.
And so this company was the major data provider.
And Meta just acquired them.
There's this new trend of acquisitions, I assume because they want to get around regulations.
But they bought 49% of the company, and then they hired all the leadership.
So the Scale AI, like Meta, hired the leadership there and bought out the investors.
They put $15 billion into the company.
The weird thing about it is Google and OpenAI are like, we're not going to use this shit anymore.
So the company value went down because people, you know, these companies don't want to use it.
And now they're going to other companies.
And so in effect, Zuck bought a talent for $15 billion.
Google recently bought a company for $3 billion essentially for one known researcher, Noam Shazeer, one of the inventors of large language model technology.
And I think they're not really acquisitions.
They do these weird deals where they buy out the investors and let the company run as a shell of itself, and then they acquire the talent.
And by the way, OpenAI does it to companies like ours.
It's just a question of scale.
Like, Zuck can give them $100 million and steal the best talent.
And companies like OpenAI, which I love, but they go to small startups and give them $10 million to grab their talents.
But it's very, very competitive right now.
And there are, like, I don't know if these individuals are actually worth these billions of dollars, but the talent war is so crazy because everyone feels like there's a race towards getting to super intelligence.
And the first company to get to super intelligence is going to reap massive amounts of rewards.
Well, you know, like I said, my philosophy tends to be different than I think the mainstream in Silicon Valley.
I think that AI is going to be extremely good at doing labor, extremely good at ChatGPT and being a personal assistant, extremely good at Replit being an automated programmer.
But the definition of superintelligence is that it is better than every human, collectively, at any task.
And I am not sure there's evidence that we're headed there.
Again, I think that one important aspect of superintelligence or AGI is that you drop this entity into an environment where it has no idea about that environment.
It's never seen it before.
And it's able to efficiently learn to achieve goals within that environment.
Right now, there's a bunch of studies showing, like, you know, GPT-4 or any of the latest models, if you give them an exam or a quiz that is even slightly different from their training data, they tank.
They do really badly on it.
I think the way that AI will continue to get better is via data.
Now, at some point, and maybe this is the point of takeoff, is that they can train themselves.
And the way we know AI could train itself is through a method called self-play.
So the way self-play works is, you know, take for example, AlphaGo.
AlphaGo is, I'm sure you remember Lee Sedol, the match between DeepMind's AlphaGo and Lee Sedol, where it won at the game of Go.
The way AlphaGo is trained is that part of it is a neural network that's trained on existing data.
But the way it achieves superhuman performance in that one domain is by playing itself like millions, billions, perhaps trillions of times.
So it starts by generating random moves and then it learns what's the best moves.
And it's basically a multi-agent system where it learns, oh, I did this move wrong, and I need to kind of re-examine it.
And it trains itself really, really quickly by doing the self-play.
It'll play fast, fast games with itself.
But we know how to make this in game environments, because game environments are closed environments.
But we don't know how to do self-play, for example, on literature, because you need objective truth.
In literature, there's no objective truth.
Taste is different.
Conjecture, philosophy, there's a lot of things.
And again, I go back to why there's still a premise of humans, is there are a lot of things that are intangible.
And we don't know how to generate objective truth in order to train machines in the self-play fashion.
But programming has objective truth.
Coding has objective truth.
Like, you can construct an environment that has a computer and has a problem.
There's a ton of problems.
And even an AI can generate sample problems.
And then there's a test to validate whether the program works or not.
And then you can generate all these programs, test them, and if they succeed, that's a reward that trains your system to get better at that.
If it doesn't succeed, that's also feedback.
And they run them all the time, and it gets better at programming.
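The loop described above, generate candidate programs, run them against tests, and use pass or fail as the reward, can be sketched in a few lines. This is a simplified toy, not any lab's actual training setup; the `solve` function and the squaring task are invented for illustration:

```python
def run_tests(program_src, tests):
    """Objective truth for code: the program either passes its tests or it doesn't."""
    try:
        env = {}
        exec(program_src, env)  # define the candidate's solve() function
        return all(env["solve"](x) == expected for x, expected in tests)
    except Exception:
        return False            # crashing is also feedback

def reward_step(candidates, tests):
    """Assign reward 1 to passing programs, 0 to failing ones.
    In real RL training, these rewards would update the model's weights."""
    return [(src, 1 if run_tests(src, tests) else 0) for src in candidates]

# Hypothetical task: square a number.
tests = [(2, 4), (3, 9)]
candidates = [
    "def solve(x): return x * x",  # correct
    "def solve(x): return x + x",  # passes (2, 4) but fails (3, 9)
]
rewards = reward_step(candidates, tests)
```

The contrast with literature is exactly the point made in the conversation: here the test harness supplies objective truth, so the reward signal comes for free, whereas taste has no such verifier.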
So I'm confident programming is going to get a lot better.
I'm confident that math is going to get a lot better.
But from there, it is hard to imagine how the AI will get better at all these other more subjective, softer sort of sciences through self-play.
I think the AI will only be able to get better through data from human labor.
If AI analyzes all the past creativity, all the different works of literature, all the different music, all the different things that humans have created completely without AI.
Do you think it could understand the mechanisms involved in creativity and make a reasonable facsimile?
I think it will be able to imitate very well how humans come up with new ideas in a way that it remixes all the existing ideas from its training data.
But by the way, again, this is super powerful.
This is not like a dig at AI.
The ability to remix all the available data into new, potentially new ideas or newish ideas because they're remixes, they're derivative, is still very, very powerful.
But, you know, the best marketers, the best, like, you know, think of, you know, one of my favorite marketing videos is Think Different from Apple.
It's awesome.
Like, I don't think that really machines are at a point where they, like, I try to talk to ChatGPT a lot about like, you know, marketing or naming.
It's so bad at that.
It's like Midwit bad at that.
And I, you know, but that's the thing.
It's like, I just don't see, and look, I'm not an AI researcher and maybe they're working, they have ideas there.
But in the current landscape of the technology that we have today, it's hard to imagine how these AIs are going to get better at, say, literature or the softer things that we as humans find really compelling.
What's interesting is the thing that's the most at threat is these sort of middle-of-the-road Hollywood movies that are essentially doing exactly what you said about AI.
They're sort of like, you know, they're sort of remixing old themes and tropes and figuring out a way to repackage it.
This was the term that's used by J.C.R. Licklider, like the grandfather of the internet from ARPA.
A lot of those guys kind of imagined a lot of what's going to happen, a lot of the future, and this idea of like human plus machine will be able to create amazing things.
So what people are making with Veo is not just because the machine is really good at, like, generating it and making it.
And it's interesting how quickly it can be made, too.
Something that would take a long time through these video editors where they were using computer generated imagery for a long time, but it was very painstaking and very, you know, very expensive.
On the way here, I was like, I want to make an app to sort of impress you with our technology.
I was like, what would Joe like?
And then I came up with this idea of like a squat form analyzer.
And so in the car on the way here, sorry, in the lobby, I made this app to...
On the way, on my phone.
And this is the really exciting thing about what we built with being able to program on your phone is being able to have that inspiration that can come anytime and just immediately pull out your phone and start building it.
So here, I'll show you.
So basically you just start recording and then do a few squats.
It's not like, like, you can't squat 12 hours a day, 350 pounds.
Your body will break down.
But you can go over positions over and over and over and over again until they're in muscle memory, but you're not doing them at full strength, right?
So like if you're rolling, right?
So say if you're doing drills, you would set up like a guard pass.
You know, when you're doing a guard pass, you would tell the person, lightly resist, and I'm going to put light pressure on you.
And you go over that position, you know, knee shield, pass, you know, hip into it, here's the counter, on the counter, darce, you know, go for the darce.
The person defends the darce, roll, take the back.
You know, one of the interesting things, when I started getting into it, I've always been into different kinds of sports, and then periods of extreme programming and obesity.
But then I tried to get back into it.
I was a swimmer early on.
But one thing that I found, especially in the lifting communities, is how intelligent everyone is.
They're actually almost, you know, they're so focused, they're autistically focused on form and programming.
And, you know, they spend so much time designing these spreadsheets for your program.
Well, people have this view of physical things, that physical things are not intelligent things, but you need intelligence in order to manage emotions.
Emotions are a critical aspect of anything physical.
Any really good athlete, you need a few factors.
You need discipline, hard work, genetics, but you need intelligence.
It might not be the same intelligence applied.
People also, they confuse intelligence with your ability to express yourself, your vocabulary, your history of reading.
Well, they assume that anything that you're doing physically, you're now no longer using your mind.
But it's not true.
In order to be disciplined, you have to understand how to manage your mind.
Managing your mind is an intelligence.
And the ability to override those emotions, to conquer that inner bitch that comes to you every time I lift that fucking lid off of that cold plunge, that takes intelligence.
You have to understand that this temporary discomfort is worth it in the long run because I'm going to have an incredible result after this is over.
You know, to tie it back to the AI discussion, I think a lot of the sort of programmer-researcher types, they know that one form of intelligence and they over-rotate on that.
And that's why it was like, oh, we're so close to perfecting intelligence.
Have you read or done any CBT cognitive behavior therapy?
No.
Basically, CBT is like a way to get over depression and anxiety based on self-talk and cues.
I had to use it, again, I had like sleep issues.
I had to use CBTI, cognitive behavior therapy for insomnia.
And the idea behind it is to build up what's called sleep pressure.
So you don't, first of all, you insomnia is performance anxiety.
Once you have insomnia, you start having anxiety.
But by the time bedtime comes, you're like, oh my God, I'm just going to, you know, toss and turn in bed and I'm just going to be in bed.
And then you start associating your bedroom with the suffering of insomnia, because you're sitting there all night and really suffering.
It's really horrific.
And first of all, you treat your bedroom as a sanctuary.
You're only there when you want to sleep.
So that's like one thing you program yourself to do.
And the other thing is you don't nap the entire day.
You don't nap at all, no matter what happens.
Like even if you're real sleepy, like get up and take a walk or whatever.
And then you build up what's called sleep pressure.
Like now you have like a lot of sleepiness.
So you go to bed, you try to fall asleep.
If you don't fall asleep within 15, 20 minutes, you get up, you go out, you do something else.
And then when you feel really tired again, you go back to bed.
Oh, God.
And then finally, once you fall asleep, if you wake up in the middle of the night, which is another sort of form of insomnia, instead of staying in bed, you get up, you go somewhere else, you go read or do whatever.
And slowly you program yourself to see your bed and, oh, like the bed is where I sleep.
It's only where I sleep.
I don't do anything else there.
And you can get over insomnia that way instead of using pills and all the other stuff.
Well, I just watch things that have nothing to do with the world.
Like, I play pool.
I'm pretty competitive.
I'm pretty good.
And so I like watching professional pool matches.
And there's a lot of them on YouTube.
So I just watch pool.
And I just watch, you know, patterns, how guys get out, stroke, how they use their stroke, like how different guys have different approaches to the game.
I mean, I don't generally have anxiety, not like a lot of people do.
I mean, when I say anxiety, I really feel for people that genuinely suffer from actual anxiety.
My anxiety is all sort of self-imposed.
And when I get online at night and I think about the world, my family's asleep, which is generally when I write. As long as I'm writing, I'm okay.
Well, sort of, but it's also – I feel like pool is a discipline, just like archery.
I'm also obsessed with archery.
Archery is a discipline.
And I feel like the more divergent disciplines that you have in your life, the more you understand what it is about these things that makes you excel and get better at them.
And the more when I get better at those things, I get better at life.
Drinking a gallon of milk a day, GOMAD, is undeniably the most effective nutritional strategy for adding slabs of mass to young underweight males.
Milk is relatively cheap, painless to prepare, and the macronutrient profile is very balanced, and calories are always easier to drink than eat.
Unfortunately, those interested in muscular hypertrophy who are not young, underweight, and male, populations where GOMAD is not recommended, will need to put more effort into the battle to avoid excess fat accumulation.
Body composition can be manipulated progressively, much like barbell training to achieve the best results.
For example, the starting strength novice linear progression holds the exercise selection, frequency, and volume variables constant.
Every 48 to 72 hours, the load stressor is incrementally increased to elicit an adaptation in strength.
If the load increase is too significant or insignificant, the desired adaptation won't take place.
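The linear-progression idea he's reading about can be sketched in a few lines. The increment size here is an illustrative assumption, not a prescription from the text:

```python
# Novice linear progression sketch: exercise selection, frequency, and
# volume stay constant; only the load stressor increases each session
# (every 48-72 hours). The 2.5 kg increment is an assumed example value.

def next_session_load(current_load_kg, increment_kg=2.5):
    """Add a small, fixed increment for the next session."""
    return current_load_kg + increment_kg

def plan(start_kg, sessions, increment_kg=2.5):
    """Project the loads across a block of sessions."""
    loads = [start_kg]
    for _ in range(sessions - 1):
        loads.append(next_session_load(loads[-1], increment_kg))
    return loads

print(plan(60.0, 5))  # → [60.0, 62.5, 65.0, 67.5, 70.0]
```

The point of the constant-everything-else design is that if the adaptation stalls, you know the load increment was the variable that was too large or too small.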
Yeah, this is the intelligence.
This is intelligence.
This is the intelligence involved in lifting that people who are on the outside of it would dismiss.
I had back pain since my late teens, and the doctors did an MRI and found that there's a bit of a bulge, and they wanted to do an operation on it.
If you can bring up the Wikipedia page for salience network, because I don't want to get it wrong, but the salience network is a network in the brain that neuroscientists found.
My doctor, Taddy Akiki, told me about this.
The salience network gets reinforced whenever you obsess over your pains or your health issues.
I had Abigail Shrier on and we were talking about that in regards to cognitive therapy, that there are a lot of people who obsess over their problems so much that their problems actually become bigger.
I was getting A's, but I just didn't want to sit in class.
And actually, this is when I started thinking about programming on my phone.
I was like, maybe I can code my phone in class.
But I felt there was injustice.
ADHD, whatever you want to call it.
I just can't sit in class.
Just give me a break.
And so I felt justified to rebel or fix the situation somehow.
So I decided to hack into the university and change my grades so I can graduate because everyone was graduating.
It was like five years in.
It took me six years to get through a four-year program just because I can't sit in class and have some dyslexia and things like that.
So I decided to do that.
And I'm like, okay, hacking takes a lot of time because you're coding, you're scripting, you're running scripts against servers and you're waiting.
And I'm like, to optimize my time, I'm just going to do this da Vinci thing, polyphasic sleep, just a few hours a day. By the way, there's a Seinfeld episode where, what's his name, the crazy guy in Seinfeld, does that.
But anyways, I was able to hack into the university by working for weeks using polyphasic sleep and was able to change my grades.
And initially, I didn't want to do it on my own grades, but I had a neighbor who I went to school with, and I was like, let's change his grade and see if it actually succeeds.
And it actually succeeded in his case.
He was my lab rat.
But in my case, I got caught.
And the reason I got caught is that in the database, there's your grade out of 100, 0 to 100.
When you get banned because of attendance, your grade is de facto 35.
So I thought I would just change that, and that's the thing that will get me to pass.
Well, it turns out there's another field in the database about whether you're banned or not.
This is bad coding, this is bad programming, because this database is not normalized.
The same state is stored in two different fields.
So I'll put the blame on them for not designing the right database.
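The normalization bug he's describing can be sketched like this. The table and column names are hypothetical; the point is that the "banned" state lives in two places, so updating one copy leaves the other stale:

```python
import sqlite3

# Hypothetical, simplified version of the schema described above:
# the ban is stored implicitly (grade forced to 35) AND explicitly
# (a separate is_banned flag). That duplication is the denormalization.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE enrollments (
        student_id INTEGER,
        course     TEXT,
        grade      INTEGER,   -- 0-100; an attendance ban forces this to 35
        is_banned  INTEGER    -- 0 or 1: a second copy of the same state
    )
""")
con.execute("INSERT INTO enrollments VALUES (1, 'CS101', 35, 1)")

# Editing only the grade leaves the duplicated state inconsistent:
con.execute("UPDATE enrollments SET grade = 70 WHERE student_id = 1")
grade, banned = con.execute(
    "SELECT grade, is_banned FROM enrollments WHERE student_id = 1"
).fetchone()
print(grade, banned)  # → 70 1: a passing grade, but the flag still says banned
```

In a normalized design the ban would be recorded once, and the effective grade derived from it, so a single edit couldn't produce this contradiction, which is exactly the mismatch that got him caught.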