March 31, 2011 - Freedomain Radio - Stefan Molyneux
54:55
1881 Science, Reason, Evidence - and Freedomain Radio

News from the latest in science, and its relationship to rational philosophy. Sources: http://www.fdrurl.com/edge


Hey everybody, Steph, hope you're doing well.
It is Pinch Punch, end of the month, March 31st, 2011.
And recently, there was a symposium at edge.org in which, I think, 150 or 160 scientists responded to the question: what scientific concepts should everyone's cognitive toolbox hold?
And I thought it was very interesting.
And I'm not going to read all of them.
Some of them I thought were good.
Some of them I thought were less good.
You know, just my opinion.
I thought it was worth having a look at them to see what's missing.
So here's one, for instance, I thought was good.
We've talked about this before.
Howard Gardner, psychologist, Harvard University, author of Truth, Beauty, and Goodness Reframed: Educating for the Virtues in the 21st Century.
So his scientific thought that everyone should have in their thinking box is, how would you disprove your viewpoint?
And he writes, thanks to Karl Popper, we have a simple and powerful tool, the phrase, how would you disprove your viewpoint?
In a democratic and demotic society like ours, the biggest challenge to scientific thinking is the tendency to embrace views on the basis of faith or of ideology.
A majority of Americans doubt evolution because it goes against their religious teachings, and at least a sizable minority are skeptical about global warming, or more precisely the human contributions to global change, because efforts to counter climate change would tamper with the, quote, free market.
Popper popularized the notion that a claim is scientific only to the extent that it can be disproved, and that science works through perpetual efforts to disprove claims.
If American citizens, or for that matter citizens anywhere, were motivated to describe the conditions under which they would relinquish their beliefs, they would begin to think scientifically, and if they admitted that empirical evidence would not change their minds, then at least they'd have indicated that their views have a religious or an ideological rather than a scientific basis.
Well, I think that's interesting.
I mean... Efforts to counter climate change would tamper with the, quote, free market.
Of course, that's his ideology, right?
Efforts to counter climate change, if it were man-made, could be entirely around dismantling government controls and powers, right?
But this is just... People who describe limitations in other people's thinking almost always, always, always show the limitations in their own thinking.
And I'm sure I've been guilty of myself.
Of course, it's hard for me to see.
I try to remind myself not to.
But I think that's great.
I mean, I've gone through... I think maybe four major revolutions intellectually in my life.
The first was religion to atheism.
The second was socialism to libertarianism.
The third was libertarianism to anarchism.
And the fourth was politics to the family, in terms of the true furnace of social change.
And these were all very, very difficult things for me.
And of course, giving up a career of 15 years to go podcast for a quote living.
They're all very hard. So yeah, I think being able to give up cherished beliefs, find the limitations in your intellectual heroes is wonderful.
Wonderful. It's tough as hell.
But it's wonderful. And this guy is talking about how other people's thinking is tragically limited.
And he says, well, people dislike man-made climate change because it will interfere with the free market.
But that's his ideology.
That government must necessarily solve it.
But people can't see that, right?
The next one I'd like to read is from Haim Harari.
He's a physicist, former president of the Weizmann Institute of Science.
He's an author of A View from the Eye of the Storm.
His contribution is The Edge of the Circle.
My concept is important, useful, scientific, and very appropriate for edge.
But it does not exist.
It is the edge of the circle.
We know that a circle has no edge, and we also know that when you travel on a circle far enough to the right or to the left, you reach the same place.
Today's world is gradually moving towards extremism in almost every area.
Politics, law, religion, economics, education, ethics, you name it.
This is probably due to the brevity of messages, the huge amounts of information flooding us, the time pressure to respond before you think, and the electronic means, Twitter text messages, which impose superficiality.
Only extremist messages can be fully conveyed in one sentence.
In this world, it often appears that there are two corners of extremism, atheism and religious fanaticism.
Far right and far left in politics; suffocating, bureaucratic, detailed regulatory rules or complete laissez-faire.
No ethical restrictions in biology research and absolute restrictions imposed by religion.
One can continue with dozens of examples.
But in reality, the extremists in the two edges always end up in the same place.
Hitler and Stalin both murdered millions and signed a friendship pact.
Far-left secular atheist demonstrators in the Western world, including gays and feminists, support Islamic religious fanatics who treat women and gays as low animals.
It has always been known that no income tax and 100% income tax yield the same result, no tax collected at all, as shown by the famous Laffer curve.
This is the ultimate meeting point of the extremist supporters of tax increase and tax reduction.
I'm sorry, I just have to interrupt to mention that this is what happens when you take the Aristotelian mean and apply it too broadly.
Too much murder, too little murder.
Because saying that no income tax and 100% income tax yield the same result, no tax collected at all, is not helpful.
I mean, that's like saying, if we kill all our slaves and we free all our slaves, both solutions end up with no slavery.
Because in one, all the slaves are dead, so there's no slavery left.
In the other one, all the slaves are free, so there's no such thing as slavery.
And saying, well, because they end up with the same thing, they're morally equivalent.
Well, I mean, how do you even argue something like that?
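To put the quoted Laffer-curve claim in concrete terms, here is a minimal toy sketch of my own; the assumption that taxable activity shrinks linearly as the rate rises is invented for illustration and is not from the essay. Collections are zero at a 0% rate because nothing is taxed, and zero at a 100% rate because nothing taxable gets produced.

```python
# Toy Laffer-curve sketch (invented model, purely for illustration):
# taxable activity is assumed to shrink linearly as the rate rises,
# so revenue = rate * base * (1 - rate) is zero at both extremes.

def toy_revenue(rate: float, base: float = 100.0) -> float:
    """Revenue collected at a given tax rate under the toy model."""
    assert 0.0 <= rate <= 1.0
    return rate * base * (1.0 - rate)

for rate in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"rate {rate:.0%}: revenue {toy_revenue(rate):5.1f}")

# The endpoints both print 0.0, but everything else about a 0% regime
# and a 100% regime differs.
```

Identical revenue at the two endpoints is all the curve claims; it says nothing about the two regimes being equivalent in any other sense, which is exactly the objection here.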
So he goes on to say, societies preaching for absolute equality among their citizens always end up with the largest economic gaps.
Fanatic extremist proponents of developing only renewable energy sources with no nuclear power delay or prevent acceptable interim solutions to global energy issues, just as much as the oil producers.
Misuse of animals in biology research is as damaging as the objections of fanatic animal rights groups.
One can go on and on with illustrations which are more visible now than they were a decade or two ago.
we live on the verge of an age of extremism.
So the edge of the circle is the place where all these extremists meet, live, and preach.
The military doctor who refuses to obey orders, quote, because Obama was born in Africa, and the army doctor who murdered 12 people in Texas are both at the edge of the circle.
If you are a sensible, moderate-thinking person, open any newspaper and see how many times you will read news items or editorials which will lead you to say, wow, these people are really at the edge of the circle.
Now what's sad about this guy is that he just uses extremist language himself to paint people as extremists, right?
So he says atheism and religious fanaticism, right?
So he's used the word fanaticism and said, well, fanatics are extreme.
But this fanaticism is a synonym for extremism.
So far left, far right, Islamic religious fanatics.
He writes of, quote, fanatic extremist proponents of developing only renewable energy.
He's just using words that are synonyms for extremism and fanaticism and saying, well, you see, because I've used these words, these people are extremists and fanatics.
It's very sad. It's just an argument by adjective.
That's no good.
Christian Keysers, neuroscientist, Scientific Director, Neuroimaging Center, University Medical Center Groningen.
The mirror fallacy. With the discovery of mirror neurons and similar systems in humans, neuroscience has shown us that when we see the actions, sensations, and emotions of others, we activate brain regions as if we were doing similar actions, were touched in similar ways, or made similar facial expressions.
In short, our brain mirrors the states of the people we observe.
Intuitively, we have the impression that while we mirror, we feel what is going on in the person we observe.
We empathize with him or her.
When the person we see has the exact same body and brain as we do, mirroring would tell us what the other feels.
Whenever the other person is different in some relevant way, however, mirroring will mislead us.
Imagine a masochist receiving a whiplash.
Your mirror system might make you feel his pain, because you would feel pain in his stead.
What he actually feels, though, is pleasure.
You committed the mirror fallacy of incorrectly feeling that he would have felt what you would have felt, not what he actually felt.
The world is full of such fallacies.
We feel dolphins are happy just because their face resembles ours while we smile, or we attribute pain to robots in sci-fi movies.
We feel an audience in Japan fails to like a presentation we gave, because their poise would be our boredom.
Labeling them and realizing that the way we interpret the social world is through projection might help us reappraise these situations and beware.
Well, I like that because I've talked about that and the difference between sympathy and empathy.
Empathy is correctly experiencing what other people feel.
Sympathy is recognizing that you in this situation would feel the same thing.
So, as I've talked about before, a woman being stalked by some stranger down a lonely alley would feel empathy for his aggression, but not sympathy for it.
So I think that empathy is very important, but it's not to be confused with sympathy, which is the projection of our own experiences into other people's experiences.
So I think that was a very good point and something very, very important to remember and something that I consistently don't remember as well as I'd like to.
Now this is a long but good one from George Lakoff, cognitive scientist and linguist, author of The Political Mind.
He says, conceptual metaphor.
Conceptual metaphor is at the center of a complex theory of how the brain gives rise to thought and language, and how cognition is embodied.
All concepts are physical brain circuits deriving their meaning via neural cascades that terminate in linkage to the body.
That is how embodied cognition arises.
He'll explain it a bit more.
Primary metaphors are brain mappings linking disparate brain regions, each tied to the body in a different way.
For example, more is up, as in prices rose, links a region coordinating quantity to another coordinating verticality.
The neural mappings are directional, linking frame structures in each region.
The directionality is determined by spike-timing-dependent plasticity.
Primary metaphors are learned automatically and unconsciously by the hundreds prior to metaphoric language just by living in the world and having disparate brain regions activated together when two experiences repeatedly co-occur.
Complex conceptual metaphors arise via neural bindings.
Sorry, before we go on, I think what it means here is something like when you're a little kid, you look up at your parents all the time, and so looking up is power, which is why judges sit higher up, kings have their thrones, and tall people tend to do better in situations of conflict or in business.
So, you know, you get this metaphor that looking up is to be in a subservient position and looking down is to be in a dominant position and that just gets hardwired in our brains.
I think that's what he's talking about, and if anybody knows any better or differently, please let me know.
He goes on. Complex conceptual metaphors arise via neural bindings, both across metaphors and from a given metaphor to a conceptual frame circuit.
This guy's not the friendliest writer in the world, but...
So let's see what he explains about that.
Because conceptual metaphors unconsciously structure the brain's conceptual system, much of normal everyday thought is metaphoric, with different conceptual metaphors used to think with on different occasions or by different people.
A central consequence is that the huge range of concepts that use metaphor cannot be defined relative to the outside world, but are instead embodied via interactions of the body and brain with the world.
This is when I've said for many years, the state is an effect of the family, that people view the state the way they view the family.
This is what he's talking about. People cannot look at the state objectively.
They cannot look at taxation as force, or the initiation of force.
They cannot look at an absence of a social contract.
All they can do is interpret the state through the lens of the family.
And so that is why people are so passionate about it.
It's why they're so irrational about it and why they're so wed to their ideology and obscuring of coercion within the state because so often coercion within the family is obscured.
He goes on, there are consequences in virtually every area of life.
Marriage, for example, is understood in many ways as a journey, a partnership, a means for growth, a refuge, a bond, a joining together, and so on.
What counts as a difficulty in the marriage is defined by the metaphor used.
Since it is rare for spouses to have the same metaphors for their marriage, and since the metaphors are fixed in the brain but unconscious, it is not surprising that so many marriages encounter difficulties.
In politics, conservatives and progressives have ideologies defined by different metaphors.
Various concepts of morality around the world are constituted by different metaphors.
These results show the inadequacy of experimental approaches to morality in social psychology, which ignore both how conceptual metaphor constitutes moral concepts and why those metaphors arise naturally in cultures around the world.
Even mathematical concepts are understood via metaphor, depending on the branch of mathematics.
Emotions are conceptualized by metaphors that are tied to the physiology of emotion.
In set theory, numbers are sets of a certain structure.
Though conceptual metaphors have been researched extensively in the fields of cognitive linguistics and neural computation for decades, experimental psychologists have been experimentally confirming their existence by showing that, as circuitry physically in the brain, they can influence behavior in the laboratory.
The metaphors guide the experimenters, showing them what to look for.
Confirming the conceptual metaphor that the future is ahead, the past is behind, experimenters found subjects thinking about the future leaned slightly forward, while those thinking about the past leaned slightly backwards.
Subjects asked to do immoral acts in experiments tended to wash or wipe their hands afterwards, confirming the conceptual metaphor that morality is purity.
Subjects moving marbles upwards tended to tell happy stories while those moving marbles downwards tended to tell sad stories confirming happy is up, sad is down.
Similar results are coming in by the dozens.
The new experimental results on embodied cognition are mostly in the realm of conceptual metaphor.
Perhaps most remarkably, there appear to be brain structures that we are born with that provide pathways ready for metaphor circuitry.
Edward Hubbard has observed that critical brain regions coordinating space and time measurement are adjacent in the brain, making it easy for the universal metaphors for understanding time in terms of space to develop, as in Christmas is coming, or we're coming up on Christmas.
Mirror neuron pathways linking brain regions coordinating vision and hand actions provide a natural pathway for the conceptual metaphor that seeing is touching, as in their eyes met.
Conceptual metaphors are natural and inevitable.
They begin to arise in childhood just by living in the everyday world.
For example, a common conceptual metaphor is events with causal effects are actions by a person.
That is why the wind blows, why storms can be vicious, and why there is religion, in which the person causing those effects is called a god, or God with a capital G if there is only one. The most common metaphors for God in the Western tradition are that God is a father, or a person with father-like properties, a creator, lawgiver, judge, punisher, nurturer, shepherd, and so on, and that God is the infinite, the all-knowing, all-powerful, all-good, and first cause.
These metaphors are not going to go away.
The question is whether they will continue to be taken literally. Those who believe and promote the idea that reason is not metaphorical, that mathematics is literal and structures the world independently of human minds, are ignoring conceptual metaphor and encouraging false literalness, which can be harmful. The science is clear:
Metaphorical thought is normal.
That should be widely recognized.
Every time you think of paying moral debts, or getting bogged down on a project, or losing time, or being at a crossroads in a relationship, you are unconsciously activating a conceptual metaphor circuit in your brain, reasoning using it, and quite possibly making decisions and living your life on the basis of your metaphors.
And that's just normal.
There's no way around it.
Metaphorical reason serves us well in everyday life, but it can do harm if you're unaware of it.
I think that's very good.
It's something I've been talking about for a long time, that people reason in terms of metaphors, and those metaphors tend to be embodied in early childhood experiences.
And this is one of the reasons I spend so much time, if people want to, talking about dreams, because dreams are the ways in which we interpret most of the moral content of our lives.
So I think it's very, very important to understand the metaphors that are at work in your own mind and in your own life, because it's the only way that we can really break through these metaphors to get through to reason and reality and empiricism on the other side.
Milford Wolpoff, Professor of Anthropology, author of Race and Human Evolution.
He likes GIGO. A shorthand abstraction I find to be particularly useful in my own cognitive toolkit comes from the world of computer science and applies broadly, in my experience, to science and scientists.
GIGO means garbage in, garbage out.
Its application in the computer world is straightforward and easy to understand, but I have found a much broader application throughout my career in paleoanthropology.
In computer work, garbage results can arise from bad data or from poorly conceived algorithms applied to analysis.
I don't expect that the results from combining both of these are a different order of garbage, because bad is bad enough.
The science I am used to practicing has far too many examples of mistaken, occasionally fraudulent data and inappropriate, even illogical analyses, and it is all too often impossible to separate conclusions from assumptions.
So I like this one, of course, because if you have errors at the beginning of your philosophical or at the base of your ethical or philosophical theory, then everything that comes out will be largely garbage.
And since statism is an error of universality, it's an error of omitting universality as a requirement for your philosophy, everything that comes out of it is crap.
And, of course, the ultimate garbage in, garbage out is religion.
Roger Schank, psychologist and computer scientist, author of Making Minds Less Well Educated Than Our Own.
He writes about experimentation, which I think is very interesting.
He writes, some scientific concepts have been so ruined by our education system that it is necessary to explain about the ones that everyone thinks they know about when they really don't.
We learn about experimentation in school.
What we learn is that scientists conduct experiments, and if we copy exactly what they did in our high school labs, we will get the results they got.
We learn about the experiments that scientists do, usually about the physical and chemical properties of things, and we learn that they report their results in scientific journals.
So, in effect, we learn that experimentation is boring.
It's something done by scientists and has nothing to do with our daily lives.
And this is a problem.
Experimentation is something done by everyone all the time.
Babies experiment with what might be good to put in their mouths.
Toddlers experiment with various behaviors to see what they can get away with.
Teenagers experiment with sex, drugs, and rock and roll.
But because people don't really see these things as experiments, nor as ways of collecting evidence in support or refutation of hypotheses, they don't learn to think about experimentation as something they constantly do and thus need to learn to do better.
Every time we take a prescription drug we are conducting an experiment.
But we don't carefully record the results after each dose and we don't run controls and we mix up the variables by not changing only one behavior at a time so that when we suffer from side effects we can't figure out what might have been the true cause.
We do the same thing with personal relationships.
When they go wrong we can't figure out why because the conditions are different in each one.
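As a minimal sketch of that confounding problem, here is a hypothetical log with invented entries and factor names, not anything from Schank's essay: an observation only supports a causal guess when exactly one thing changed relative to baseline.

```python
# Hypothetical self-experiment log; every entry and factor name is invented.
# Each record lists what changed relative to baseline and what was observed.
observations = [
    {"changed": [],                                        "side_effect": False},
    {"changed": ["new_drug"],                              "side_effect": True},
    {"changed": ["new_drug", "less_sleep", "more_coffee"], "side_effect": True},
]

for obs in observations:
    changed = obs["changed"]
    if len(changed) == 1:
        # Exactly one variable differs from baseline, so the result is attributable.
        print(f"{changed[0]} -> side effect: {obs['side_effect']}")
    elif changed:
        # Several variables moved at once: the evidence is confounded.
        print(f"confounded ({', '.join(changed)}): cannot attribute the effect")
    else:
        print(f"baseline -> side effect: {obs['side_effect']}")
```

Nothing sophisticated is happening there; it is just the change-one-variable-at-a-time discipline made explicit.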
Now, while it is difficult, if not impossible, to conduct controlled experiments in most areas of our own lives, it is possible to come to understand that we are indeed conducting an experiment when we take a new job or try a new tactic in a game we're playing, or when we pick a school to attend, or when we try to figure out how someone is feeling, or when we wonder why we ourselves feel the way that we do.
Every aspect of life is an experiment that can be better understood if it is perceived in that way.
But because we don't recognize this, we fail to understand that we need to reason logically from the evidence we gather and that we need to carefully consider the conditions under which our experiments might have been conducted and that we need to decide when and how we might run the experiment again with better results.
In other words, the scientific activity that surrounds experimentation is about thinking clearly in the face of evidence obtained as the result of an experiment.
But people who don't see their actions as experiments and those who don't know how to reason carefully from data will continue to learn less well from their own experiences than those who do.
Since most of us have learned the word experiment in the context of a boring 9th grade science class, most people have long since learned to discount science and experimentation as being relevant to their lives.
If school taught basic cognitive concepts such as experimentation in the context of everyday experience... Now, I really like this, because...
I mean, there are fundamentally two ways that human beings can figure out better courses of action.
The first is you try something and it fails, right?
And then you say, well, that failed, and you try and sort of figure out why.
And the other is you reason it out ahead of time, and you try to go from there.
So, to use my standard bridge-building engineering metaphor, you can either build a bunch of bridges until one stands up, and when each falls down try to figure out why, or you reason it out ahead of time.
And of course it's not either-or, fundamentally, but if you look at statism not as a fixed thing but as an experiment, right, so you look at the poverty programs from the 1960s as an experiment, if you look at Social Security as an experiment, it's an argument from effect, and it's not perfect, but it does at least lead you to look at something as not working the way it was supposed to.
As I've always said, if people wanted to solve the problem of poverty, it was being solved before the poverty programs came in.
They need to look at the poverty programs as experiments and compare the results to the results claimed and see what happens.
Same thing with Afghanistan, same thing with Iraq, same thing soon to be with Libya.
You need to look at what was claimed and what was delivered.
And if what was delivered was far worse than what was claimed, Then you need to go back and look at the claims and figure out what was wrong with them and what was not understood at the time.
And of course we would, I think, most of us would argue that what was not understood at the time was that the initiation of force is evil.
And poverty programs, drug war and all that, all based on the initiation of force and that's why they don't work.
Right? So experimentation, looking at things as experiments rather than fixed outcomes or moral absolutes, God help us.
Really, really important. Robert Provine, psychologist, neuroscientist, author of Laughter.
TANSTAAFL. TANSTAAFL is the acronym for There Ain't No Such Thing As A Free Lunch, a universal truth having broad and deep explanatory power in science and daily life.
The expression originated from the practice of saloons offering free lunch if you buy their overpriced drinks.
Science fiction master Robert Heinlein introduced me to TANSTAAFL in The Moon Is a Harsh Mistress, his 1966 classic, in which a character warns of the hidden cost of a free lunch.
The universality of the fact that you can't get something for nothing has found application in sciences as diverse as physics, laws of thermodynamics, and economics.
Milton Friedman used a grammatically upgraded variant as the title of his 1975 book, There's No Such Thing as a Free Lunch.
Physicists are clearly on board with TANSTAAFL, less so many political economists in their smoke-and-mirrors world.
My students hear a lot about TANSTAAFL, from the biological costs of the peacock's tail to our nervous system that distorts physical reality to emphasize changes in time and space.
When the final tally is made, peahens cast their ballot for the sexually exquisite plumage of the peacock and its associated vigor, and it is much more adaptive for humans to detect critical sensory events than to be high-fidelity light and sound meters.
In such cases, lunch is not free, but comes at reasonable cost, as determined by the grim but honest accounting of natural selection, a process without hand-waving and incantation.
I don't think much needs to be said about that.
Thank you.
Gerald Holton, Mallinckrodt Professor of Physics and Professor of the History of Science.
Skeptical empiricism.
In politics and society, large, important decisions are all too often based on deeply held presuppositions, ideology, or dogma, or, on the other hand, on headlong pragmatism without study of long-range consequences.
Therefore, I suggest the adoption of skeptical empiricism, the kind that is exemplified by the carefully thought-out and tested research in science at its best.
It differs from plain empiricism of the sort that characterized the writings of the scientist-philosopher Ernst Mach, who refused to believe in the existence of atoms because one could not, quote, see them.
To be sure, in politics and daily life, on some topics decisions have to be made very rapidly on few or conflicting data.
Yet precisely for that reason it will be wise also to launch a more considered program of skeptical empiricism on the same topic, if only to be better prepared for the consequences, intended or not, that follow from the quick decision.
And here we have good old Stevie Pinker, Johnstone Family Professor, Department of Psychology, Harvard University, author of The Stuff of Thought.
Positive-sum games. I like this one.
A zero-sum game is an interaction in which one party's gain equals the other party's loss.
The sum of their gains and losses is zero.
More accurately, it is constant across all combinations of their courses of action.
Sports matches are quintessential examples of zero-sum games.
Winning isn't everything, it's the only thing.
And nice guys finish last.
A non-zero-sum game is an interaction in which some combination of actions provides a net gain, positive-sum, or a net loss, negative-sum, to the two of them.
The trading of surpluses, as when herders and farmers exchange wool and milk for grain and fruit, is a quintessential example, as is the trading of favors, as when people take turns babysitting each other's children.
In a zero-sum game, a rational actor seeking the greatest gain for himself or herself would necessarily be seeking the maximum loss for the other actor.
In a positive-sum game, a rational, self-interested actor may benefit the other guy with the same choice that benefits himself or herself.
More colloquially, positive-sum games are called win-win situations and are captured in the cliché, everybody wins.
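As a minimal sketch of the distinction Pinker is drawing, with toy payoff numbers of my own invention: in a zero-sum (more generally, constant-sum) game the two players' payoffs add up to the same total in every cell, while in a positive-sum game some combinations of choices raise the joint total.

```python
# Toy payoff matrices (keys are (A's choice, B's choice), values are
# (payoff to A, payoff to B)). All numbers are invented for illustration.

zero_sum = {            # e.g. a bet: whatever A wins, B loses
    ("heads", "heads"): (1, -1), ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1), ("tails", "tails"): (1, -1),
}

positive_sum = {        # e.g. trade: both gain if both cooperate
    ("trade", "trade"): (3, 3), ("trade", "hoard"): (0, 1),
    ("hoard", "trade"): (1, 0), ("hoard", "hoard"): (1, 1),
}

def classify(game):
    """Report whether the joint payoff is the same in every cell."""
    totals = {a + b for a, b in game.values()}
    if len(totals) == 1:
        return f"constant-sum (total is always {totals.pop()})"
    return f"non-constant-sum (totals range over {sorted(totals)})"

print("bet game:  ", classify(zero_sum))
print("trade game:", classify(positive_sum))
```

Run against the two toy matrices, the first reports a constant total of 0 and the second reports totals ranging from 1 to 6, which is all that "positive-sum" means here: there are joint choices that leave the pair better off in total.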
Once people are thrown together in an interaction, their choices don't determine whether they are in a zero or non-zero-sum game.
The game is a part of the world they live in.
But people, by neglecting some of the options on the table, may perceive that they are in a zero-sum game, when in fact they are in a non-zero-sum game.
Moreover, they can change the world to make their interactions non-zero-sum.
For these reasons, when people become consciously aware of the game-theoretic structure of their interaction, that is, whether it is positive, negative, or zero-sum, they can make choices that bring them valuable outcomes like safety, harmony, and prosperity without their having to become more virtuous, noble, or pure. Some examples.
Squabbling colleagues or relatives agree to swallow their pride, take their losses, or lump it to enjoy the resulting comity, rather than absorbing the cost of continuous bickering in the hopes of prevailing in a battle of wills.
Two parties in a negotiation split the difference in their initial bargaining positions to get to yes.
A divorcing couple realizes that they can reframe their negotiations from each trying to get the better of the other while enriching their lawyers, to trying to keep as much money for the two of them, and out of the billable hours of Dewey, Cheatham & Howe, as possible.
Populaces recognize that economic middlemen, particularly ethnic minorities who specialize in that niche, such as Jews, Armenians, overseas Chinese, and expatriate Indians, are not social parasites whose prosperity comes at the expense of their hosts, but positive-sum game creators who enrich everyone at once.
Countries recognize that international trade doesn't benefit their trading partner to their own detriment, but benefits them both, and turn away from beggar-thy-neighbor protectionism to open economies, which, as classical economists noted, make everyone richer, and which, as political scientists have recently shown, discourage war and genocide.
Actually, it's not quite true.
It doesn't make everyone richer. Protectionism makes some people poorer.
Anyway, warring countries lay down their arms and split the peace dividend rather than pursuing pyrrhic victories.
Now, granted, some human interactions really are zero-sum.
Competition for mates is a biologically salient example.
And even in positive-sum games, a party may pursue an individual advantage at the expense of joint welfare.
But a full realization of the risks and costs of the game-theoretic structure of an interaction, particularly if it is repeated, so that the temptation to pursue an advantage in one round may be penalized when roles reverse in the next, can... militate?
I think it's mitigate. Militate, he says, against various forms of short-sighted exploitation.
I think he means mitigate the various forms of short-sighted exploitation.
Has an increasing awareness of the zero or non-zero-sumness of interactions in the decades since 1950 actually led to increased peace and prosperity in the world?
It's not implausible.
International trade and membership in international organizations has soared in the decades that game-theoretic thinking has infiltrated popular discourse.
And perhaps not coincidentally, the developed world has seen both spectacular economic growth and a historically unprecedented decline in several forms of institutionalized violence, such as wars between great powers, wars between wealthy states, genocides, and deadly ethnic riots.
Since the 1990s, these gifts have started to accrue to the developing world as well, in part because they have switched their foundational ideologies from ones that glorify zero-sum class and national struggle to ones that glorify positive-sum market cooperation.
All these claims can be documented from the literature in international studies.
The enriching and pacifying effects of participation in positive-sum games long antedate the contemporary awareness of the concept.
The biologists John Maynard Smith and Eörs Szathmáry have argued that an evolutionary dynamic which creates positive-sum games drove the major transitions in the history of life:
the emergence of genes, chromosomes, bacteria, cells with nuclei, organisms, sexually reproducing organisms, and animal societies.
In each transition, biological agents entered into larger wholes, in which they specialized, exchanged benefits, and developed safeguards to prevent one from exploiting the rest to the detriment of the whole.
An explicit recognition among literate people of the shorthand abstraction positive sum game and its relatives may be extending a process in the world of human choice that has been operating in the natural order for billions, billions, I tell you, of years.
Well, that's great.
Soon, soon, not holding your breath soon, but some point in the next generation or two, everybody will understand that taxation is a zero-sum game.
In fact, it's a negative-sum game.
It's negative-sum in the long run.
It's win-lose in the short run.
Dylan Evans, lecturer in behavioral science, he writes about the law of comparative advantage.
It is not hard to identify the discipline in which to look for the scientific concepts that would most improve everybody's cognitive toolkit.
It has to be economics.
No other field of study contains so many ideas ignored by so many people at such great costs to themselves and the world.
The hard task is picking just one of the many such ideas that economists have developed.
On reflection, I plumped for the Law of Comparative Advantage, which explains how trade can be beneficial for both parties, even when one of them is more productive than the other in every way.
At a time of growing protectionism, it is more important than ever to reassert the value of free trade.
Since trade in labor is roughly the same as trade in goods, the Law of Comparative Advantage also explains why immigration is almost always a good thing, a point which also needs emphasizing at a time when xenophobia is on the rise.
In the face of well-meaning but ultimately misguided opposition to globalization, we must celebrate the remarkable benefits which international trade has brought us and fight for a more integrated world.
Just so people know, the law of comparative advantage is that even if I'm better than you at everything, there are some things that I'm relatively better at than others.
Let's say I'm a faster typist than you, but I'm also Brad Pitt, who can make a movie if he's not typing.
So even though I'm a faster typist than you, it makes sense for me to farm out my typing to you, even though I could do it faster, because it's better for me to make movies than type faster.
So that's a law of comparative advantage, and it's a good choice, I think.
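A minimal worked version of that logic, with invented numbers: even if I produce more of everything per hour, what decides who should type is what an hour of typing costs each of us in forgone alternatives.

```python
# Hypothetical output per hour (invented numbers purely for illustration).
# "me" is better at both tasks in absolute terms, yet should still farm out typing.
output = {
    "me":  {"pages_typed": 20, "movie_value": 1000},  # $ of movie work per hour
    "you": {"pages_typed": 10, "movie_value": 10},
}

for person, o in output.items():
    # Opportunity cost of one typed page = movie value given up to type it.
    cost_per_page = o["movie_value"] / o["pages_typed"]
    print(f"{person}: each typed page costs ${cost_per_page:.2f} of forgone movie work")

# me:  each typed page costs $50.00 of forgone movie work
# you: each typed page costs  $1.00 of forgone movie work
# Typing is far cheaper for "you", so both sides gain if "me" makes movies
# and pays "you" to type -- comparative, not absolute, advantage decides.
```

Since a typed page costs me fifty times what it costs you in these made-up numbers, it pays me to buy your typing even though I am the faster typist, which is the whole content of the law.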
And in the continuing sequence of people who agree with me, even though they don't know it, Douglas T. Kenrick, professor of psychology, Arizona State University.
He's the editor of Evolution and Social Psychology.
He writes about sub-selves and the modular mind.
Although it seems obvious that there is a single you inside your head, research from several sub-disciplines of psychology suggests that this is an illusion.
The you who makes a seemingly rational and self-interested decision to discontinue a relationship with a friend who fails to return your phone calls, borrows thousands of dollars he doesn't pay back and lets you pick up the tab in the restaurant, is not the same you who makes very different calculations about a son, about a lover, or about a business partner.
Three decades ago, cognitive scientist Colin Martindale advanced the idea that each of us has several sub-selves, and he connected his idea to emerging ideas in cognitive science.
Central to Martindale's thesis were a few fairly simple ideas, such as selective attention, lateral inhibition, state-dependent memory, and cognitive dissociation.
Although all the neurons in our brains are firing all the time, we'd never be able to put one foot in front of the other if we weren't able to consciously ignore most of that hyper-abundant parallel processing going on in the background.
When you walk down the street, there are thousands of stimuli to stimulate your already overtaxed brain.
Hundreds of different people of different ages, with different accents, different hair colors, clothes, different ways of walking and gesturing, not to mention all the flashing advertisements, curbs to avoid tripping over, and automobiles running yellow lights as you try to cross the intersection.
Hence, attention is highly selective.
The nervous system accomplishes some of that selectiveness by relying on the powerful principle of lateral inhibition, in which one group of neurons suppresses the activity of other neurons that might interfere with an important message getting up to the next level of processing.
In the eye, lateral inhibition helps us notice potentially dangerous holes in the ground, as the retinal cells stimulated by light areas send messages suppressing the activity of neighboring neurons, producing a perceived bump in brightness and a valley of darkness near any edge.
Several of these local edge detector-style mechanisms combine at a higher level to produce shape detectors, enabling us to discriminate a B from a D and a P. Higher up in the nervous system, several shape detectors combine to allow us to discriminate words.
And at a higher level, to discriminate sentences, and at a still higher level, place those sentences in context, thereby discriminating whether the statement, Hi, how are you today?
is a romantic pass or a prelude to a sales pitch.
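A minimal numeric sketch of the lateral-inhibition story above (the toy signal and inhibition strength are my own choices, not from the essay): subtracting a fraction of each neighbour's activity from every unit turns a plain step in brightness into a dip on the dark side of the edge and a peak on the bright side, the perceived "valley" and "bump."

```python
# Toy 1-D lateral inhibition: each unit's response is its own input minus a
# fraction of its two neighbours' inputs. Signal and constant are invented.
step = [1, 1, 1, 1, 5, 5, 5, 5]   # a simple dark-to-bright edge
k = 0.2                            # how strongly neighbours suppress each unit

def respond(signal, k):
    out = []
    for i, x in enumerate(signal):
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i < len(signal) - 1 else x
        out.append(x - k * (left + right))
    return out

print([round(v, 2) for v in respond(step, k)])
# [0.6, 0.6, 0.6, -0.2, 3.8, 3.0, 3.0, 3.0]
# The dip just before the edge and the peak just after it exaggerate the
# contrast exactly where the brightness changes.
```

The uniform stretches are flattened and the edge is sharpened, which is the sense in which inhibition lets an important message stand out from its neighbours.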
State-dependent memory helps sort out all that incoming information for later use by categorizing new information according to context.
If you learn a stranger's name after drinking a doppio espresso at the local Java house, it will be easier to remember that name if you meet again at a Starbucks than if the next encounter is at a local pub after a martini.
For several months after I returned from Italy, I would start speaking Italian and making expansive hand gestures every time I drank a glass of wine.
At the highest level, Martindale argued that all of those processes of inhibition and dissociation lead us to suffer from an everyday version of dissociative disorder.
In other words, we all have a number of executive sub-selves and the only way we manage to accomplish anything in life is to allow only one sub-self to take the conscious driver's seat at any given time.
Martindale developed his notion of executive sub-selves before modern evolutionary approaches to psychology had become prominent.
But the idea becomes especially powerful if you combine Martindale's cognitive model with the idea of functional modularity.
Building on findings that animals and humans use multiple and very different mental processes to learn different things, evolutionarily informed psychologists have suggested that there is not a single information processing organ inside our heads.
But instead, multiple systems dedicated to solving different adaptive problems.
Thus, instead of having a random and idiosyncratic assortment of sub-selves in my head, different from the assortment inside your head, each of us has a set of functional sub-selves.
One dedicated to getting along with our friends, one dedicated to self-protection, protecting us from the bad guys, one dedicated to winning status, another to finding mates, a distinct one for keeping mates, which is a very different set of problems as some of us have learned, and yet another to caring for our offspring.
Thinking of the mind as composed of several functionally independent adaptive sub-selves helps us understand many seeming inconsistencies and irrationalities in human behavior, such as why a decision that seems rational when it involves one's son seems eminently irrational when it involves a friend or a lover, for example. Well, enough about that.
I think that's a fairly good validation of stuff we've been working on for the last couple of years.
Anyway, you can get more of these.
I'll just read one more. I'll put the link in the description to the podcast and under a video if I do a video.
She's been on the show, Alison Gopnik, psychologist, UC Berkeley, author of The Philosophical Baby.
The rational unconscious. One of the greatest scientific insights of the 20th century was that most psychological processes are not conscious.
But the unconscious that made it into the popular imagination was Freud's irrational unconscious.
That unconscious is a roiling, passionate id barely held in check by conscious reason and reflection.
This picture is still widespread even though Freud has been largely discredited scientifically.
The unconscious that has actually led to the greatest scientific and technological advances might be called Turing's rational unconscious.
If the vision of the unconscious you see in movies like Inception were scientifically accurate, it would include phalanxes of nerds with slide rules instead of women in negligees wielding revolvers amid Dali-esque landscapes.
At least, that might lead the audience to develop a more useful view of the mind, if not, admittedly, to buy more tickets.
Early thinkers like Locke and Hume anticipated many of the discoveries of psychological science, but thought that the fundamental building blocks of the mind were conscious ideas.
Alan Turing, the father of the modern computer, began by thinking about the highly conscious and deliberate step-by-step calculations performed by human computers.
Like the women decoding German ciphers at Bletchley Park.
That's a reference to the Second World War in England.
She goes on: his first great insight was that the same processes could be instantiated in an entirely unconscious machine, with the same results.
A machine could rationally decode the German ciphers using the same steps that the conscious computers went through.
And the unconscious relay and vacuum tube computers could get to the right answers in the same way that the flesh and blood ones could.
Turing's second great insight was that we could understand much of the human mind and brain as an unconscious computer, too.
The women at Bletchley Park brilliantly performed conscious computations in their day jobs, but they were unconsciously performing equally powerful and accurate computations every time they spoke a word or looked across the room.
Discovering the hidden messages about three-dimensional objects in the confusing mass of retinal images is just as difficult and important as discovering the hidden messages about submarines in the incomprehensible Nazi telegrams, and the mind turns out to solve both mysteries in a similar way.
More recently, cognitive scientists have added the idea of probability into the mix so that we can describe an unconscious mind and design a computer that can perform feats of inductive as well as deductive inference.
Using this sort of probabilistic logic, a system can accurately learn about the world in a gradual probabilistic way, raising the probability of some hypotheses and lowering that of others, and revising hypotheses in the light of new evidence.
This work relies on a kind of reverse engineering.
First, work out how any rational system could best infer the truth from the evidence it has.
Often enough, it will turn out that the unconscious human mind does just that.
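A minimal sketch of the kind of probabilistic bookkeeping the passage describes; the priors and likelihoods below are invented for illustration, not drawn from Gopnik. A hypothesis starts with a prior probability, and each observation raises or lowers it via Bayes' rule.

```python
# Toy Bayesian updating: revise belief in a hypothesis as evidence arrives.
# All numbers below are invented purely for illustration.

def update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """Posterior probability of the hypothesis after one observation (Bayes' rule)."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

belief = 0.5  # start undecided about the hypothesis "this switch controls the light"
for light_came_on in [True, True, False, True]:
    if light_came_on:
        belief = update(belief, p_obs_if_true=0.9, p_obs_if_false=0.2)
    else:
        belief = update(belief, p_obs_if_true=0.1, p_obs_if_false=0.8)
    print(f"light {'on' if light_came_on else 'off'} -> belief now {belief:.2f}")
```

Hypotheses that keep predicting what actually happens drift toward certainty, and a single contrary observation pulls the belief back down rather than discarding it outright, which is the gradual, probabilistic learning described above.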
Some of the greatest advances in cognitive science have been the result of this strategy, but they've been largely invisible in popular culture, which has been understandably preoccupied with the sex and violence of much evolutionary psychology.
Like Freud, it makes for a better movie.
Vision science studies how we are able to transform the chaos of stimulation at our retinas into a coherent and accurate perception of the outside world.
It is, arguably, the most scientifically successful branch of both cognitive science and neuroscience.
It takes off from the idea that our visual system is, entirely unconsciously, making rational inferences from retinal data to figure out what objects are like.
Vision scientists began by figuring out the best way to solve the problem of vision and then discovered in detail just how the brain performs these computations.
The idea of the rational unconscious has also transformed our scientific understanding of creatures who have traditionally been denied rationality, such as young children and animals.
It should transform our everyday understanding, too.
The Freudian picture identifies infants with that fantasizing irrational unconscious, and even on the classic Piagetian view...
Piaget, he was a Swiss psychologist working in the 1940s and 1950s, who obsessively recorded his three children, I think.
So on Piaget's view, young children are profoundly illogical, but contemporary research shows the enormous gap between what young children say, and presumably what they experience, and their spectacularly accurate if unconscious feats of learning, induction, and reasoning.
The rational unconscious gives us a way of understanding how babies can learn so much when they consciously seem to understand so little.
Oh, that's true.
I mean, I can completely negotiate with Isabella now.
She's just turned 27 months.
Another way the rational unconscious could inform everyday thinking is by acting as a bridge between conscious experience and the few pounds of grey goo in our skulls.
The gap between our experience and our brains is so great that people ping-pong between amazement and incredulity at every study that shows that knowledge or love or goodness is really in the brain, though where else could it be?
There is important work linking the rational unconscious to both conscious experience and neurology.
Intuitively, we feel that we know our own minds, that our conscious experience is a direct reflection of what goes on underneath.
But much of the most interesting work in social and cognitive psychology demonstrates the gulf between our rationally unconscious minds and our conscious experience.
Our conscious understanding of probability, for example, is truly awful, in spite of the fact that we unconsciously make subtle probabilistic judgments all the time.
The scientific study of consciousness has made us realize just how complex, unpredictable and subtle the relation is between our minds and our experience.
At the same time, to be genuinely explanatory, neuroscience has to go beyond the new phrenology of simply locating psychological functions in particular brain regions.
The rational unconscious lets us understand the how and why of the brain, and not just the where.
Again, vision science has led the way, with elegant empirical studies showing just how specific networks of neurons can act as computers rationally solving the problem of vision.
Of course, the rational unconscious has its limits.
Visual illusions demonstrate that our brilliantly accurate visual system does sometimes get it wrong.
Conscious reflection may be misleading sometimes, but it can also provide cognitive prosthesis, the intellectual equivalent of glasses with corrective lenses, to help compensate for the limitations of the rational unconscious.
The institutions of science do just that.
The greatest advantage of understanding the rational unconscious would be to demonstrate that rational discovery isn't a specialized, abstruse privilege of the few we call scientists, but is instead the evolutionary birthright of us all.
Really tapping into our inner vision and inner child might not make us happier or more well-adjusted, but it might make us appreciate just how smart we really are.
That's a great insight from Dr.
Gopnik. And I really like the implication: if our unconscious is rational, and children are largely unconscious, then dealing with children as rational agents is itself rational, which is exactly what I've argued for, and I think lots of other people have argued for, for many years.
Now, I've read most of these, but due to time constraints, I can't read them all.
It is 115,000 words, which is quite a bit.
And the one thing I haven't seen is, and again, the link's below, the one thing I haven't seen is universality.
And that, to me, is really tragic, but it's inevitable.
Universality is the most volatile aspect of philosophy and of thinking, because it tears down things like statism, religion, and abusive behaviors.
So universality is something that most people shy away from.
It is such a third rail.
You touch it and, well, you don't die, but it can get a bit uncomfortable sometimes.
So I think that's one thing that's quite missing, and that is the foundation of science, foundation of philosophy, of mathematics, engineering, universality.
And it's nowhere mentioned, but it is the most powerful thing.
And of course, because it's not mentioned, we have to talk about it even more.
All right, we'll just do one more, from Don Tapscott, founder of Moxie Insight and adjunct professor at the Rotman School of Management.
Oh, right. Designing your mind.
Given recent research about brain plasticity and the dangers of cognitive load, the most powerful tool in our cognitive arsenal may well be design.
Specifically, we can use design principles and discipline to shape our minds.
This is different than learning and acquiring knowledge.
It's about designing how each of us thinks, remembers, and communicates appropriately and effectively for the digital age.
Today's popular hand-wringing about its effects on cognition has some merit, but rather than predicting a dire future, perhaps we should be trying to achieve a new one.
New neuroscience discoveries give hope.
We know that brains are malleable and can change depending on how they are used.
The well-known study of London taxi drivers showed that a certain region of the brain involved in memory formation was physically larger in taxi drivers than in non-taxi-driving individuals of a similar age.
This effect did not extend to London bus drivers, supporting the conclusion that the requirement of London's taxi drivers to memorize the multitude of London streets drove structural brain changes in the hippocampus.
Results from studies like these support the notion that even among adults, the persistent, concentrated use of one neighborhood of the brain really can increase its size and presumably also its capacity.
Not only does intense use change adult brain regional structure and function, but temporary training and perhaps even mere mental rehearsal seems to have an effect as well.
A series of studies showing that one can improve tactile braille character discrimination among seeing people who are temporarily blindfolded showed that.
Brain scans revealed that participants' visual cortex responsiveness was heightened to auditory and tactile sensory input after only five days of blindfolding for over an hour each time.
The existence of lifelong neuroplasticity is no longer in doubt.
The brain runs on a use-it-or-lose-it motto.
So, could we use it to build it right?
Why don't we use the demands of our information-rich, multi-stimuli, fast-paced, multitasking digital existence to expand our cognitive capability?
Psychiatrist Dr. Stan Kutcher, an expert on the adolescent brain and mental health, who has studied the effect of digital technology on brain development, says we probably can.
Quote, there is emerging evidence suggesting that exposure to new technologies may push the brains of the Net Generation, teenagers and young adults, past conventional capacity limitations.
When the straight-A student is doing her homework at the same time as five other things online, she is not actually multitasking.
Instead, she has developed a better active working memory and better switching abilities.
I can't read my email and listen to iTunes at the same time, but she can.
Her brain has been wired to handle the demands of the digital age.
How could we use design thinking to change the way we think?
Good design typically begins with some principles and functional objectives.
You might aspire to have a strong capacity to perceive and absorb information effectively, concentrate, remember, infer meaning, be creative, write, speak and communicate well, and to enjoy important collaborations and human relationships.
How could you design your use, or absence, of media to achieve these goals?
Something as old school as a speed reading course could increase your input capacity without undermining comprehension.
If it made sense in Evelyn Wood's day, it is doubly important now, and we've learned a lot since her day about how to read effectively.
Feeling distracted? The simple discipline of reading a few dull articles per day rather than just the headlines and summaries could strengthen attention.
Want to be a surgeon? Become a gamer or rehearse while on the subway.
Rehearsal can produce changes in the motor cortex as big as those induced by physical movement.
In one study, a group of participants was asked to play a simple five-finger exercise on the piano, while another group was asked to think about playing the same, quote, song in their heads, using the same finger movements one note at a time.
Both groups showed a change in their motor cortex, with the changes among the group who mentally rehearsed the song as great as those among the group who physically played the piano.
Losing retention? Decide how far you want to adopt Albert Einstein's law of memory.
When asked why he went to the phone book to get his own number, he replied that he only memorized things he couldn't look up.
There is a lot to remember these days.
Between the dawn of civilization and 2003, there were five exabytes of data collected.
An exabyte equals one quintillion bytes.
Today, five exabytes of data gets collected every two days.
Soon there will be five exabytes every few minutes.
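As a quick back-of-envelope check on those figures, taking the quoted numbers at face value rather than vouching for them:

```python
# Arithmetic on the quoted figures only; the figures themselves are as stated above.
pre_2003_total_eb = 5           # exabytes collected up to 2003, as quoted
rate_eb_per_year = 5 / 2 * 365  # five exabytes every two days, annualized

print(f"{rate_eb_per_year:.1f} exabytes per year at the quoted rate")
print(f"about {rate_eb_per_year / pre_2003_total_eb:.0f}x the pre-2003 total, every year")
```

If the quoted rate is even roughly right, the entire pre-2003 stock of recorded data is now produced again every couple of days.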
Humans have a finite memory capacity.
Can you develop criteria for which memories to keep inboard and which to store outboard?
Or want to strengthen your working memory and capability to multitask?
Try reverse mentoring, learning with your teenager.
This is the first time in history when children are authorities about something important, and the successful ones are pioneers of a new paradigm in thinking.
Extensive research shows that people can improve cognitive function and brain efficiency through simple lifestyle changes, such as incorporating memory exercises into their daily routine.
Why don't schools and universities teach design thinking for thinking?
We teach physical fitness, but rather than brain fitness we emphasize cramming young heads with information and testing their recall.
Why not courses that emphasize designing a great brain?
Does this modest proposal raise the spectre of designer minds?
I don't think so. The design industry is something done to us.
I'm proposing we each become designers.
But I suppose the phrase, quote, I love the way she thinks, could take on new meaning.
Well, that's great. And of course, philosophy is fantastic, fantastic, fantastic for that.
In terms of keeping you young, keeping your brain flexible, getting stuck in ideology is like a train going around a track, spiraling to its eventual stop.
And that, I think, is...
It's really, really important. So anyway, these are, again, edge.org.
You can get more of these. I think they're well worth a look through.
And if anybody finds universality that I missed, please let me know.