
ai alignment problem

8 episodes

03 Jul
Thu
The Joe Rogan Experience
Joe Rogan Experience #2345 - Roman Yampolskiy
#wireheading turing_award superintelligence_risks ai_alignment_problem artificial_intelligence_safety bengio computronium existential_threats fermi_paradox geoff_hinton joe_rogan neuralink nick_bostrom roman_yampolskiy sam_altman simulation_theory stuart_russell
12 Jun
Thu
Making Sense
Sam Harris
#420 — Countdown to Superintelligence
#us_china_ai_race sycophancy superintelligence_risks ai_2027 ai_alignment_problem ai_deceptive_behaviors alignment_problem artificial_intelligence_safety containment_problem daniel_kokotajlo geopolitical_ai_arms_race jan_nukuno openai reward_hacking sam_harris
00:20:14
02 Mar
Sat
Decoding the Gurus
Sean Carroll: The Worst Guru Yet?!?
#yudkowsky value_alignment_problem stochastic_parrot ai_alignment_problem artificial_general_intelligence artificial_intelligence derek_chauvin eliza glenn_loury large_language_models openai radley_balko sam_altman sam_harris scientific_expertise sean_carroll secular_intellectuals
02:39:39
10 Jun
Sat
Decoding the Gurus
Eliezer Yudkowsky: AI is going to kill us all
#wuhan_institute_of_virology transformer_architecture superintelligence agi ai_alignment_problem alignment_problem artificial_intelligence cassandra_complex consciousness_debate eliezer_yudkowsky epistemic_humility existential_risk gain_of_function_research gpt_4 paperclip_maximizer steel_manning strawmanning
03:20:15
13 Apr
Thu
Lex Fridman Podcast
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371
#yuval_noah_harari tristan_harris tononi ai_alignment_problem artificial_intelligence_safety consciousness_definition eliezer_yudkowsky elon_musk eu_ai_act future_of_life_institute gpt_4 lex_fridman life_3.0 max_tegmark moloch sam_altman stuart_russell superintelligence_risks technological_acceleration
02:47:58
30 Mar
Thu
Lex Fridman Podcast
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
#von_neumann_cognition superintelligence_risks robin_hanson agi ai_alignment_problem artificial_intelligence_safety eliezer_yudkowsky elon_musk existential_risk fermi_paradox gpt_4 gradient_descent machine_consciousness paperclip_maximizer rlhf
03:17:36
09 Jul
Fri
Making Sense
Sam Harris
#255 — The Future of Intelligence
#sam_harris reference_frames palm ai_alignment_problem artificial_general_intelligence a_thousand_brains cortical_columns darpa embodied_cognition handspring illusory_truth_effect jeff_hawkins national_science_foundation neocortex_function numenta
00:57:47
18 Jan
Mon
Lex Fridman Podcast
Max Tegmark: AI and Physics | Lex Fridman Podcast #155
#vasily_arkhipov stanislav_petrov mu_zero ai_alignment_problem ai_feynman alphafold_2 alphazero artificial_general_intelligence autonomous_weapons_regulation boeing_737_max consciousness_engineering elon_musk existential_risk gpt_3 improvethenewsorg knight_capital lex_fridman machine_learning_physics max_tegmark mit_center_for_fundamental_physics_artificial_intelligence
03:02:31