
superintelligence risks

7 episodes

09 Sep
Tue
Health Ranger
Mike Adams
DTV – Roman Yampolskiy on AI superintelligence, human extermination and simulation theory
#transhumanism, superintelligence risks, simulation theory, agi, ai endgame, ai regulation, artificial intelligence safety, double slit experiment, enoch ai, existential risk, mandela effect, nvidia blackwell, perpetual motion machine, prisoners dilemma, ray kurzweil, roman yampolskiy
01:45:13
03 Jul
Thu
The Joe Rogan Experience
Joe Rogan Experience #2345 - Roman Yampolskiy
#wireheading turing_award superintelligence_risks ai_alignment_problem artificial_intelligence_safety bengio computronium existential_threats fermi_paradox geoffrey_hinton joe_rogan neuralink nick_bostrom roman_yampolskiy sam_altman simulation_theory stuart_russell
12 Jun
Thu
Making Sense
Sam Harris
#420 — Countdown to Superintelligence
#us china ai race, sycophancy, superintelligence risks, ai 2027, ai alignment problem, ai deceptive behaviors, alignment problem, artificial intelligence safety, containment problem, daniel kokotajlo, geopolitical ai arms race, jan nukun, openai, reward hacking, sam harris
00:20:14
13 Jan
Mon
Health Ranger
Mike Adams
BBN, Jan 13, 2025 – California government incompetence...
#universal basic income, superintelligence risks, steve quayle, ai energy arms race, brighteon, california infrastructure collapse, cbdc, dan golka, depopulation theories, economic bankruptcy, enoch ai, fair plan, health ranger store, karen bass, mel gibson, mike adams, naturalnews, operation meltdown, preparedness and survival, state farm
03:36:55
20 May
Sat
Dark Horse
Weinstein & Heyi...
#174: Take Care (Bret Weinstein & Heather Heying DarkHorse Livestream)
#yair levy, white throated sparrows, transgender activism, ai safety debate, cordyceps, cross sex hormones, eliezer yudkowsky, eric thornburg, karine jean pierre, medical interventions, natural selection, puberty blockers, r1 models, scientific american, social contagion, superintelligence risks, toxoplasmosis
01:48:17
13 Apr
Thu
Lex Fridman Podcast
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371
#yuval noah harari, tristan harris, tononi, ai alignment problem, artificial intelligence safety, consciousness definition, eliezer yudkowsky, elon musk, eu ai act, future of life institute, gpt 4, lex fridman, life 3.0, max tegmark, moloch, sam altman, stuart russell, superintelligence risks, technological acceleration
02:47:58
30 Mar
Thu
Lex Fridman Podcast
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
#von neumann cognition, superintelligence risks, robin hanson, agi, ai alignment problem, artificial intelligence safety, eliezer yudkowsky, elon musk, existential risk, fermi paradox, gpt 4, gradient descent, machine consciousness, paperclip maximizer, rlhf
03:17:36