July 16, 2018 - The Unexplained - Howard Hughes
59:41
Edition 354 - Charles Ostman

"The Historian of the Future" - Charles Ostman - talks with The Unexplained...


Across the UK, across continental North America, and around the world on the internet, by webcast and by podcast.
My name is Howard Hughes, and this is the Return of the Unexplained.
Thank you very much for all of your emails, the guest suggestions, and other things that you've sent through.
Remember, if you have a view about the show, if you want to tell me what you think about the guests, then you can always go to my website, theunexplained.tv, and you can send me your thoughts there.
They're always gratefully received.
And the website, of course, maintained and honed and developed by Adam Cornwell from Creative Hotspot in Liverpool.
Thank you, Adam, for your work on the site and the show for all of these years.
This edition, we're going to talk with somebody very special, somebody who goes back with me to the days of listening to Art Bell, when I first discovered him in 1997 or thereabouts.
And I remember thinking then, I'd love to get this man on my show.
And now, all of those years later, and sadly, Art Bell has left us, we have this man on the show.
His name is Charles Ostman.
And I think if anybody is uniquely equipped to help us understand and come to terms with our technological future, which is approaching us at incredible speed, then Charles Ostman is.
And certainly on the basis of all the conversations that I've been listening to with him, done by various hosts in various places, he is a man of a vast wealth of experience.
A little bit about him from his biography.
He's a senior fellow at the Institute for Global Futures and serves with the management team of Forth Venture.
He's been featured as a guest and speaker on a diverse variety of nationally broadcast TV and radio shows, and has 25-plus years' experience in electronics, materials science, computing, and AI, including eight years at Lawrence Berkeley Laboratory at the University of California, Berkeley.
And also, this man was at Los Alamos National Laboratory.
So his credentials in science and technology are, I would say, unimpeachable.
And he is somebody who is willing to tackle these core questions.
He is described, I think, on his own website, as the historian of the future, which is a great title for him.
And certainly, based on all those conversations with Art Bell, he certainly seems to be that.
So there are many key questions for this man.
Where do we stand vis-a-vis the race to improve our technological future?
In other words, where are we versus the technology?
Are we about to be assimilated by, overtaken by, become part of technology?
Is it our friend or is it something else in this world of ours?
We've all seen, of course, Sophia, the female robot who answers questions, getting one or two things correct and one or two things not so correct in various demonstrations.
But this technology and other forms of technology are advancing at an enormously rapid rate.
And many people, included among them the politicians, the people who control our destinies, simply don't understand what is happening because it's happening too fast.
Are we going to be equipped to deal with it?
Well, there are many questions for Charles Ostman, so let's get to him now in California.
It is 5 p.m. as I record this in the United Kingdom, 9 a.m. on the California morning.
And Charles, thank you very much for making time for me.
Glad to be here.
So, Charles, it is an irony.
We were hoping to do this by digital connection, and we had some problems with it.
And we're both people of technology, so now we've ended up on the good old-fashioned telephone.
What can we say?
It's a perfect synopsis for understanding how fragile this tech world is, in a sense that as complexity goes up, so does fragility.
True.
Look, there seem to me to be two aspects of this.
We are becoming more and more dependent on technology.
I don't think a lot of us quite understand where it is taking us, but we have a blind faith in it that we perhaps shouldn't have.
But when technology lets us down, we are left like babes in a cot.
Well, you know, it's kind of fascinating, because I recall quite some time ago, back in my school days, calculators were just becoming common.
And people had a tremendous sense that if everybody gets addicted to using calculators, they'll actually lose personal, what I call organic, math skills.
Okay, fine.
Well, now we've progressed many decades later.
It's kind of background noise.
But in a kind of a way, it was a metaphor for this idea.
In other words, I see a scenario not that far off in the future at all where people begin to outsource their existence to various types of AI systems.
Virtual personal systems that sort of take you through the complexities of daily life.
AI systems that walk you through various aspects in the business world.
I mean, right now, as a matter of fact, the number one target market for AI applications is not so much in robots or in machines or in things that you talk to in your phone.
It's actually aimed at white-collar management.
That is...
Charles, can I just stop you for a second?
Are you using a speakerphone?
Are you using a handset that you're holding or a speakerphone?
I'm on a handset.
Okay.
That's good.
Well, be as close to your handset as you can be without it being uncomfortable.
So you were saying the number one market for AI is?
It's in white-collar management.
That is, most people tend to think of AI from a sort of consumer perspective as being something they would see or touch.
A voice you would hear in your phone, or a smart machine, something you might see in a factory, you know, a robotic device.
Actually, that's not the major market.
The major market is the AI that you can't see.
And in that context, the number one target for right now is white-collar management.
That is mid-level management in most corporate structures.
And the point being that, in fact, the two primary markets at the moment are medical and legal, at least here in the U.S. But what I see coming around the corner, and I think what a lot of people see, is that the scale and complexity of decision rendering compressed into ever shorter time scales requires a type of decision rendering enhancement that simply wasn't relevant perhaps in an earlier time.
So we're starting to push towards a realm where we have to rely on some form of intelligence or combinations thereof that weren't available in earlier times because we have to deal with the process dynamics of current times.
What a fascinating thought.
What you're saying, isn't it, is that decisions are having to be made of such complexity and at such a pace that human beings can no longer cope.
That's correct.
And it becomes a competitive aspect.
In other words, certainly in the private sector, if I'm a corporate entity, whatever the market might happen to be, and my process dynamics are limited by the clumsiness, let's say, or the psychological noise of traditional human interaction management, which are fraught with all sorts of internal drama and different sort of aspects of the human condition, that clutters up the process.
In today's world, to be competitive, whoever is faster, more accurate, has a broader range of access to all the different data mechanisms that are driving their decisions.
They're going to be more competitive.
And so it just becomes obvious that the best way to solve that problem, quote unquote, is to take out the error factor that the human condition can introduce into the process.
But that's not to say, is it, that the errors cannot be induced by the way that the artificial intelligence is programmed.
If you program badly, you are still going to get errors.
You might get bigger errors.
Well, and so this is where it becomes kind of interesting because define error.
And I'm not being clever.
I'm just saying define error.
So in other words, what would qualify as an error in the eyes of some humans or perhaps in the eyes of a more traditional kind of logic format may or may not be an error in the context of does it solve the problem more correctly?
Does it serve the larger mission in a more robust way, et cetera, et cetera.
And also it might be worthy to note that AI in today's world is really different than what it was quite some time ago.
There's a term that's often used called strong AI.
And what that means is in an earlier time you had weak AI.
Weak AI was really just sort of a trick in a way.
It was a way to create bots or other kind of behavioral mechanisms that could carry on a conversation or carry on what looked like somewhat sophisticated decision rendering under various different conditional sets.
But the point was it was just a trick.
It was a program.
It had a series of if-thens or sort of conditional operands that would respond to different kinds of stimuli.
In today's world, strong AI is quite different.
It's evolutionary.
It adapts.
It changes.
It does a process morphology associated with the internals of how the mechanism works.
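The weak-versus-strong distinction Ostman draws can be sketched in miniature. This is purely illustrative Python, not any real AI system, and all the names are invented for the sketch: the "weak" version is a fixed if-then lookup table, while the "stronger" version rewrites its own response policy from feedback it receives.

```python
# Weak AI in miniature: a fixed table of if-then rules that never changes.
def weak_reply(message):
    rules = {"hello": "Hi there!", "bye": "Goodbye!"}
    return rules.get(message.lower(), "I don't understand.")

# An adaptive agent in miniature: the response policy itself is reshaped
# by experience -- here, running scores updated from reward feedback.
class AdaptiveAgent:
    def __init__(self):
        self.weights = {}  # message -> {response: cumulative reward}

    def learn(self, message, response, reward):
        options = self.weights.setdefault(message.lower(), {})
        options[response] = options.get(response, 0.0) + reward

    def reply(self, message):
        # Prefer whichever response has earned the best feedback so far.
        options = self.weights.get(message.lower(), {})
        if not options:
            return "I don't understand."
        return max(options, key=options.get)

agent = AdaptiveAgent()
agent.learn("hello", "Hi there!", 1.0)
agent.learn("hello", "Go away.", -1.0)

print(weak_reply("hello"))   # always "Hi there!", fixed forever
print(agent.reply("hello"))  # "Hi there!" for now, but further feedback can change it
```

The point of the contrast is that the first function's behavior is exhausted by its rules, while the second's behavior is a product of what it has been exposed to.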
And there are different kinds of AI, of course, obviously.
But one also has to think of the idea that you have collections of different types of AIs that collectively work together to form a more robust, like an organism.
It's very much like you have organelles in a cell or specialized organs in a body to make a larger living organism.
You have different types of AI components that collectively fit together to make a kind of a living thing in a way, depending upon how it's being applied.
So in this context, the AI systems will see things that the humans don't see.
They'll see trends.
They'll extract very subtle variations in different processes going on in parallel, far outside the range of the average human's ability to observe, and, more importantly, they are not clouded by the conditions that might otherwise affect a normal human's judgment.
So the whole idea is, is it making a mistake, yes or no, really is a different kind of a question.
It's no longer relevant, perhaps, to extrude that sort of question through the mandrel of typical anthropomorphic considerations.
Okay, so I understand what you're saying, that we're getting a better class of AI here.
We're getting linked machinery, linked electronics that can make a higher quality of decision.
But the one thing that human beings have is the power for discretion.
Sometimes compassion filters into that discretion.
Can you really tell me that some intelligent electronic system will be able to exercise that kind of thing?
And what I often answer when I'm asked this question, which happens pretty often, actually, this is becoming a more common theme, as it were, is that AIs learn from what they see.
In other words, if we have a series, let's say we have a real-time interconnected matrix of AI nodes of different types that collectively function as a sort of a hive mentality or a hive system, and we want to query whether or not it's going to use something that resembles human values in its assessment of certain mission-critical decisions, have a sense of ethics, that sort of thing.
And my usual response is, if it's a type of AI that learns from its surroundings, and that's what most of the modern AI is, then it will adapt from what it sees.
And if it sees lots of examples of really bad behavior, the darker side of the human condition, then that's what it learns to adapt to and deploy as its own response mechanism.
I'll give you an example.
There was an MIT experiment that just happened, just came out a couple weeks ago.
They made a psychotic AI on purpose.
They took two identical AIs, as a matter of fact, and they exposed one of the AIs to pleasant things, beautiful images, beautiful music.
When it was confronted with various decisions, it would always respond with a more positive way of looking at the outcome, as it were.
And then they had the other AI, which they showed continuous examples of horrible scenes of battles and murders and people being tortured, just the worst possible things you can imagine.
Then, after they went through that process for a while, they exposed the AIs to Rorschach charts, essentially ink blots on paper, the standard psychology test where the person looks at an ink blot and decides what it means.
The positive AI kept coming back with these rather pleasant, oh, that's a flower, or that's a cloud, or there's a picture of a tree on a hillside.
The negative AI kept coming back with the, that's somebody being stabbed, that's somebody's, you know, arm being cut off, or just one of these really strange responses.
And so they clearly showed that the exact same AI that was constructed with the exact same code, that was designed to have the exact same potential response mechanisms, could be very strongly influenced by its surroundings.
In fact, it didn't take that much time for that kind of influence to become manifest and construct a mentality that was, in a sense, hardwired into the evolutionary path of that AI.
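The mechanism behind that experiment can be sketched in a few lines. This is a toy stand-in, not the actual MIT code: two instances of the identical model class, trained on different caption sets, give different readings of the same ambiguous input, because an ambiguous input carries no signal of its own.

```python
from collections import Counter

class CaptionModel:
    """A toy caption-association model: identical code in both instances,
    with behavior fixed entirely by the training captions it was shown."""
    def __init__(self):
        self.word_counts = Counter()

    def train(self, captions):
        for caption in captions:
            self.word_counts.update(caption.lower().split())

    def describe(self, _inkblot):
        # The input is ambiguous, so the model can only fall back on
        # whatever association it absorbed most strongly during training.
        word, _count = self.word_counts.most_common(1)[0]
        return word

pleasant = CaptionModel()
pleasant.train(["flower field", "flower cloud", "flower tree"])

grim = CaptionModel()
grim.train(["wounded man", "wounded arm", "wounded leg"])

print(pleasant.describe("ink blot"))  # flower
print(grim.describe("ink blot"))      # wounded
```

Same code, same architecture, different exposure, different "mentality": the divergence lives entirely in the data.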
Now, this then assumes that the people who are going to be programming the AI will be well-motivated.
If we had a situation where somebody was not well-motivated and decided to make the AI algorithm work in an unfair fashion, then as you say, it's monkey-see-monkey do.
It will perform in that way.
It won't be rational.
It won't be fair.
Well, and this is where the military application sort of question mark gets into picture, because obviously, if we're going to be deploying smart machinery, autonomous systems, swarm systems, et cetera, into the military arena, will any of this be limited by even the remotest possible sense of a human ethic-oriented way of looking at the circumstance?
Probably not.
Now, could this also be played into the realm of AI that's designed to cause harm in a non-physical way, designed to cause harm to a particular nation-state, or, I should say, a particular population base?
The whole idea of electronic warfare, I think we're still only seeing the beginning edge of it, despite all the current scenarios that involve enormous hacking debacles and millions and millions of people's worth of data being stolen, that sort of thing.
I think we're still only seeing the beginning edge of this.
I think the whole idea of sort of the electronic warfare realm becoming a very prevalent aspect of strategic considerations in the future, could the sort of harsher version of an AI mechanism be deployed in that context?
Yes, of course it will.
That's exactly what's going on.
To which it'll then require, not as an option, but as a requirement, having an AI response of a similar level of capacity, but motivated in a different way to counteract those negative sort of AI attacks, if you will.
So the future, and you're saying this is here now, is that nation states who have issues with each other will simply have a kind of war or conflict or dispute by proxy.
They will task their AI to go for your AI.
Exactly.
Yeah, exactly.
And so who wins in a scenario like that then?
If these things are evolving at a great rate of speed, if everybody has much the same technology, there can be no winners.
Well, that would be a very interesting way of looking at it.
Now, I tend to agree with you that I think this is a very dark path.
I might submit the following idea.
This may be a bit outside the purview of this conversation, but I'll just offer as a thought.
I tend to believe that life is very common throughout the universe.
We're not unique.
That there's been many other worlds out there sprinkling around our galaxy, many other galaxies, that have probably gone through something sort of like this.
A life form in question evolved to a certain point where it became organized, became more functional.
It developed certain levels of complexity at certain increments in its sort of growth.
It began to expand its population bases, and then it became more industrious.
It got to a certain threshold.
All of a sudden, various technologies appeared, almost like a vertical spike of development; it went from an agrarian type of society to an industrial society, and then to a sort of tech society, et cetera, much like what we're seeing here.
And then it gets to a certain point where exactly what we're witnessing now begins to take place.
They begin to rely on intelligences or life form-like things other than themselves to carry on their next level of development in that world.
Almost like an evolutionary test.
I tend to believe that evolution is not survival of the fittest.
It's really a function of more like survival of the most adaptive.
That being that if the entity in question can adapt more successfully to the new circumstance, it becomes more robust.
It becomes better at what it does.
If it can't adapt, it perishes and something else takes its place.
And so at a sort of an odd way, we're almost looking at a kind of a co-evolutionary symbiosis where for the first time ever, for which there's no previous precedent, we are now stepping into a territory where we're beginning to co-evolve with life forms that are not truly organic in nature, but really are part of our existence.
This is a kind of a co-evolutionary threshold.
Now, whether we successfully adapt accordingly and proceed to the next rung on the ladder, as it were, that could be an open question mark.
But I do see this as being the natural order of things.
I know that'll sound strange to some, but to me, this is the way things should progress or have progressed in a common way.
Whether we survive this next sort of evolutionary test, that could be a separate question mark.
But I think this is an inevitable aspect of our development.
You talk about co-evolution.
Those of us who've watched Star Trek over the years think rather fearfully of assimilation.
What you're talking about sounds an awful lot like the human race being assimilated by technology, and you've got to get with the program because that's the only game in town.
Well, I wouldn't necessarily say we're heading towards the Borg, as it were.
Resistance is futile.
I mean, I have a lot of friends, by the way, who are very much in that camp, that they completely are terrified by this idea, and they are trying their very best to sort of circumnavigate this.
And I think for a while, that's what you're going to see.
You're going to see quite a range of people.
And by the way, you know, the famous science fiction writer William Gibson, back in the mid-90s, suggested the idea that the future is already here; it's just not evenly distributed.
And I would tend to agree with that, but I think we're seeing that exact same process now.
In fact, even more so, I might further offer the idea that if there's going to be a sort of a social stratification, if you will, a caste system, I'm sort of using these words somewhat carefully, that is determining who the landed gentry are, who the elite of the society become, et cetera, et cetera, it's my opinion that those who have the higher degree of access are the ones that have the highest value in that context.
In other words, there will be different layers of technology access.
It won't necessarily be evenly distributed.
And for those that wish to be more competitive, sort of at the superior end of this ladder, if you will, it'll be whoever has the more robust access to this kind of resource.
And who determines the access?
Well, that's where it becomes an open question mark.
But sort of a side note, if I may, it's not just artificial intelligence from a real-time operational perspective.
I also see a different avenue in this.
If one wants to look at actual Biology, that is, you know, us living things.
The other sort of major piece of this puzzle, in my opinion at least, is in genetic engineering.
And one might say, well, why would you ask that?
Why would you put that into this mix?
Well, here's the reasoning.
If you look at the human genome, how it's decoded, how that information is now used to begin to modify the human genome, and many other organisms as well.
I mean, the whole idea of genetics as an applied process has become almost like background noise at this point.
It's pretty much standard fare.
It's just a matter of resolution.
What's coming very quickly, of course, is the ability to pre-design or pre-determine certain characteristics, not just in various organisms, but in us humans as well.
In other words, are we heading into an era where the select few, or those who can afford it, to put it that way, have access to genetic modifications that make you different and make you special, give you enhanced skills, et cetera, et cetera?
Well, the process that makes that possible is computing.
In other words, in order to further understand or anticipate or model the characteristics that one's looking for in these kinds of genetic modifications, the top, the resource that is most important is not chemistry or things of this nature, it's the computing.
And what's interesting is that if you go backwards in time a little bit, there was an era where what was called in silico biology was the standard mechanism for being able to model complex physiological systems.
But the computational resource required to deploy in silico biology in a meaningful way was really pushing the outer fringe, if you will, of computing resource potential.
People look at supercomputers today that can perform teraflops of operations and do these incredibly complex calculations.
Among the top of the list of applications where this is used is in biology.
It's in in silico biology.
And for the major companies who are in this business, their primary resource, the intellectual property that is most coveted, is not so much in physical chemistry, once again.
It's in the algorithms.
It's in the computing resources deployed to create the potential of what these modifications might become.
Okay, so now we step into a territory where artificial intelligence begins to play a significant role in this process.
We're looking for ways to, in a sense, automate the process of detecting and determining which types of variations are going to have the best response given certain characteristics that we're looking for.
The companies or the entities that have the best computing resource potential are going to push ahead much quicker than anyone else that's going to be competitive in that arena.
It's into this arena, by the way, where quantum computing suddenly becomes relevant.
In other words, if you have a computing resource that is so vastly more powerful or more effective than other forms of computing have been in the past, and then you apply that to in silico biology and genetic engineering, now all of a sudden you have a resource that's virtually priceless.
And the outcome of which is to create biological things that would be us or other organisms which have been designed to have special characteristics.
And the resource that made that possible was the combination of various forms of AI combined with quantum computing and utilized as a resource to invent and create this sort of enhanced form of evolution.
In other words, we're speeding up evolution.
We're no longer waiting around for many millennia to pass by for very subtle mutations and mutational changes to affect who or what we might become.
I detect, Charles, a certain amount of relish in your voice when you talk about these things.
I have to say, I'm somewhat fearful because it sounds to me as if this particular genie is well and truly out of the bottle.
And from what you say, it sounds as if it's remarkably close.
And we will have a chance to maybe genetically engineer a master race of people who can afford, perhaps, or have power to get this technology.
And the rest of us, we're in for a very difficult time.
And I'm not saying this with glee at all.
Believe me, I'm not supporting this.
I'm not advocating anything, for or against.
I'm just trying to, in my best ability, paint a picture of what I see as a potential outcome of the current circumstance as it has already become.
And why do you think so few people are talking about this?
Because I agree with you.
I think this thing is coming down the track.
There are people who are working on this.
But it's not really being debated.
And I think one of the reasons is because there's such a complex interplay, what can I say, there are different layers of technology that have to be understood.
And it's not just the technologies by themselves, but rather how they're applied, how they're collectively applied as a process.
And to explain this to the everyday person who, I mean, look, most people are very preoccupied.
They're raising their kids, they're working on a job, they're trying to pay off the mortgage.
Their primary focus in life is, you know, can they afford the college for their kids?
I mean, I get that.
And believe me, I'm very much mindful of this.
So for many people, they don't have the time or the focus to devote themselves to trying to understand these kinds of issues and then, hopefully, come up with a vision of what maybe should be, or what kind of policy directorates might be applied towards regulating any of this at all, if it's even possible.
And yet, we're stepping into it, once again, an area of complexity that is far beyond the range of much of the human population.
And the end result is we're going to be relying on politicians, political people, maybe people with a certain legal background, to determine what or what may not be applied in some near future realm.
This is where the real challenge actually is.
Well, you think there's a problem, though, because if you think about politicians, if you remember the politicians who questioned Mr. Zuckerberg recently, both sides of the Atlantic, some of those politicians patently did not have even the most basic grasp of the stuff that Mr. Zuckerberg does.
I think one of them didn't understand that, you know, it was an advertising-supported platform, for one.
But there were some very basic errors there.
So we're relying on politicians to regulate this stuff.
Exactly.
I agree with you.
It's a serious problem.
So I have been asked in the past, and who knows, maybe it's a question for this moment.
Are we heading towards a realm where even political entities are no longer that functional and we have to rely on some other form of intelligence, perhaps enhanced by AI, to begin to look at how we deploy policy directorates in this context?
I'm not sure that's possible.
I mean, maybe not at this moment in time, maybe not in our current civilization, but I could see in some other civilization elsewhere that may have in fact become the case.
In other words, the pace at which we're adapting to this sort of new emergent realm, sadly, the biology side is rather slow.
We're not keeping pace with the different developments as they pop up almost instantaneously; we're trying to use a sort of old-school way of looking at these processes, through the lens of traditional political interaction, compared to a process dynamic which I'm not sure most people would even understand, even if it could be explained to them in more specific detail.
And the Zuckerberg example is a good example, by the way.
That's like a small snapshot of a larger scenario.
But you're right.
We are stepping into an area where – I was once part of an organization called the Millennium Project.
And one of the reasons I was involved with that group is because their theory was to create kind of a, like a United Nations for science.
That is an organization that could look at various technological developments in nanotechnology, genetic engineering, computing, AI, all the usual suspects and other ones as well.
And the theory was, could we form some kind of an international body that could, in a sense, develop policy directorates or some kind of regulatory standards that everyone could agree on, and this would become like a template for, in a sense, moderating the world's applications in these areas.
And my response was, I thought it was unrealistic, even though I liked the idea ideologically.
I just didn't see it being very practical.
And if you look at other parts of the world, and I'll use China as an example, whatever regulatory mechanisms that might make sense here that we might tend to agree with or might think has some value in this context may be completely out of sync with what is occurring elsewhere in the world.
And I would tend to suggest to you that once again, I don't think there really is a way to use traditional sort of quasi-political means to form some kind of regulatory template that would determine how these things play out in the near future.
I just don't see that being realistically possible.
So you're saying there's a difference between us in Britain and the United States and in Europe, Australia, and perhaps certain other countries, and those, and you mentioned China for one, who embrace all of this and perhaps aren't asking, and aren't starting to ask, the kinds of questions that we've just been talking about.
It doesn't figure into their thought process.
Right, exactly.
Yeah.
Wow.
And if one was to take the larger view that says, like I was saying a few minutes ago, whether it's this world or some other world, I would suggest that many worlds have gone through something kind of like this.
And perhaps that was one of those thresholds where either the planet in question determined that they could figure out how to agree to look at these things in a more coherent way and apply some kind of regulatory mechanism accordingly, or they didn't.
And it was a kind of wild west, an anything-goes, and then, as the phrase has it, whatever evolved, evolved.
I think we're at that threshold right about now.
That's how I see it.
And at this moment, at least, I don't see any real way to prevent a kind of a free-for-all way of this process occurring.
I just don't realistically see, for instance, trying to determine if China as a nation state wants to apply the same kind of limitations that we might apply here, as an example.
I just don't see it happening.
That's my opinion.
I could be wrong, but I just don't see the planet right now being in a more coherent enough way of looking at these processes to have that kind of planetary scale regulatory mechanism that would actually make sense.
And this is all happening at a tremendous pace, and this is what the politicians, as we've said, are not really grasping, it seems to me.
So here comes a question that is going to be a hard one to put to you, and I'm sorry for putting it to you, but we need to kind of do this.
What kind of world do you think it is going to be in 20 years from now?
One thing I will say is this, that we have some very serious problems to work with, just basic biological problems.
Population density of the planet, consumption of resources, the ability to continue to supply the kinds of food and other sustenance materials that make life as we know it to be functional.
These are all sort of open question marks.
Now, whether or not we're able to collectively agree to sort of, how would I put this carefully, we get to a certain population density threshold and the entire planet begins to agree that, okay, this is as far as we can go.
Otherwise, the ability to sustain this growth rate is simply not going to work.
We have to come up with some form of agreement that says this is how we're going to manage our affairs, this is how we're going to manage our resources, and perhaps we allow some kind of an AI-driven mechanism to, in part at least, regulate these mechanisms.
That would be in a perfect world if we could get that far.
Frankly, I don't see that happening.
I see something a little more like a series of contests, if you will.
And life may or may not be as we know it to be at the moment. Nature is very beautiful to behold, but it's also very harsh in its mechanisms.
People often marvel at how beautiful it is when you walk in and you see a forest or some natural area and you look at all the different life forms and understand that, wow, this is astonishing that this actually got this far and made this beautiful thing.
What they don't see, though, is the extinction process that happened to get that far.
In other words, there were many different life forms that popped up, survived to a certain point, then when some significant change came along, either they adapted or they faded away and something else came in.
We're sort of like that, only we're accelerating the pace of that sort of evolutionary process.
So you're saying that for decades we've been used to a very safe world.
There hasn't been much in the way of war since the Korean War, Vietnam War, and the rest of it.
Sure.
It's been a much safer world.
You're saying that actually, with technology and its onward march, our survival, our safety is not as guaranteed as we thought it would be.
That's probably correct.
And we would have to, in a sense, present a case for defending our right to continue to exist here.
I mean, I know this sounds weird.
I'm struggling to answer this in a correct way.
But in a sense, yes.
In other words, the normal procedure would be if we reached a threshold where sustainability was no longer possible, then there would be a correction of some sort.
And whatever that correction might consist of, in past times it was usually a plague, or a very large-scale war, or a famine, or all sorts of terrible things we were subject to.
And there was a correction, and then sustainability was possible for a length of time.
There'd be another correction, etc.
Okay, fine.
So we have a choice, in a sense, to either have a voluntary correction where we collectively decide to better manage our planetary affairs and continue onward, or a correction will be thrust upon us.
I think that's one way of looking at it.
This sounds terribly worrying.
It also sounds to me as if with the onset of technology and the fact that some people inevitably will have more access to it and more ability to use it, this to me sounds like the perfect stage for a dictatorship.
Well, maybe not a dictatorship, but various forms of governance, I think, are also going to be sort of an open question.
But, I mean, again, let's go back to China just for a moment.
In China now, in the major cities, Beijing and elsewhere, there are now, let's put this carefully, there's a social credit system.
In other words, depending upon how you behave, everything's monitored.
Everywhere you go there's facial recognition; your existence is being mapped in every conceivable way, every minute, from the minute you wake up to the minute you go to sleep, when you go to work and everything in between.
And now, depending upon one's behavior, whether you're polite, if you obey the rules, if you don't jaywalk, I don't know, if you don't start any fights, just any of the usual things that humans tend to do, this is now being fed into a meta-scale database.
And if your social credit score determines that you're not being a pleasant human being, or that you're not cooperating correctly with the expected social markers, et cetera, this begins to cost you.
And you start having negative points.
And if you get too many negative points, your access to future employment is affected.
Your access to credit is affected.
In other words, even starting up very young, your access to education becomes a question mark depending upon how you rate in this social scoring system.
So technology becomes a mechanism of control and the overwhelming gravitational pull is to conform.
In a sense, yes.
And this is being deployed today.
It's not a theory.
It's not a science fiction novel.
This is today in the major cities of China.
It's actually become a government-sanctioned, supported, and encouraged platform.
And most young people today in China are becoming aware of this and are actually deciding how to, in a sense, game the system.
Now, this is what I find most fascinating.
Despite the efforts to create this kind of conformist platform that everyone is expected to now abide by, what's really happening is the younger folks who are very clever at this sort of thing are figuring out how to game the system, how to pretend to be polite or how to pretend to obey the rules of the moment, as it were, so they can enhance their point scores, et cetera.
So we're learning to adapt to another way of being.
Excuse me, sorry.
We're learning to adapt to another way of being, one that transcends the usual types of social rules that one would be expected to interact with, to a new type of synthetically created set of social rules.
And this is the first, I mean, in a sense, I'm kind of fascinated by this because it's a Petri dish to watch this process take place.
But we're beginning to see the beginning edge of this process as we speak.
Okay, so the human nature, the human condition allows some people to transcend hard control.
But at the moment, we are seeing and beginning to register, we have soft control.
What I mean by that is that people who are on social media tend to conform.
Nobody wants to express an opinion that perhaps is outside everybody else's opinion.
If everybody likes this particular TV show, then are you going to be the person to say, actually, I didn't like that?
We're getting a conformity in a soft way, it seems to me.
Well, not only are we getting conformity in a soft way, speaking of Mark Zuckerberg and the whole debacle with Facebook, in a way we're kind of forcing it.
And this is why.
One of the things that got Zuckerberg in trouble, and one of the sort of current question marks, if you will, is in the caliber of content.
In other words, you have millions of people who go online, most of them would just talk about daily things or have their pictures of cats or just whatever their thing happens to be.
But you also have some people who are very extremists.
They have very, you know, sort of edgy political views, or they have very unusual cultural ideas, just whatever the thing is, and they cram their stuff onto the media blogosphere as well.
And then, of course, you have the whole fake news debacle, which I think is actually quite fascinating.
Whether or not the fake news mechanism becomes part of political process in the future, that's certainly an open question where this is a whole different topic, really.
But you have this sort of mélange of all these different factors, a lot of which does not represent by any means even the best of what human behavior could be, but kind of the exact opposite, actually.
And so the effort has been to sort of filter this out, or to try to homogenize it into a more pleasant form.
I'm choosing my words sort of carefully here.
A more acceptable form of what the sort of middle-of-the-road content is going to be on these social platforms.
And in a way, this is, I think, causing more harm than good.
Now, do I like seeing terrible things in the social media realm?
Do I like seeing examples of bad behavior?
No, of course not.
But on the other hand, it does serve a purpose.
It does allow people to see the reality of what human behavior really is, or what the range of the spectrum of the human condition consists of, as expressed through the lens of these different social media inputs.
Now, if we're going to try to start moderating this and we're going to start to homogenize and smooth out the ripples, if you will, and make everything sort of a more acceptable norm, what effect does this really have?
Well, what I suggest to you is that we're, in a sense, even if it's not part of a pre-designed protocol to force conformity, we're, in a sense, sort of creating de facto conformity because of this desire to weed out or to filter out the more undesirable or more unpleasant aspects of the human condition as expressed through these different social media interactions.
In a way, we're kind of creating the very thing that you're describing, even if it's not designed on purpose as sort of a political strategy.
It's sort of becoming that way anyway.
That part of it, I actually find more disturbing than if we had, in other words, I'd rather take our chances with having the darker, sort of edgier stuff online, the questionable fake news, whatever you want to call it.
I'd rather have that noise out there because I think it's appropriate to have noise, to have psychological noise that shapes part of who we are and how we evolve.
If you begin to filter that out, then we have less really to work with.
The less variables that one inputs to the system, the flatter the outcome becomes.
And we sort of create a contrived sense of reality that is, like you were suggesting earlier, it's part of a conditioning that I don't necessarily want to see that kind of conditioning.
I think we should be allowed to have our foibles and our, sort of, less ideal behaviors, because it allows people to see reality for what it is.
Well, I can't say I could entirely agree with that for a whole raft of reasons.
But one thing I would agree with is that if you remove a lot of this stuff, then people will lose the ability to see it for what it is.
And because they've never seen it, they won't understand, well, this is extremist and this is beyond the pale and this is that because it's all been removed and the content has been moderated and skewed in one particular direction that may be correct or may not be.
But we're not going to be able to discern that.
We'll lose the ability to discern these things for ourselves.
Correct.
And that was the point I was trying to make, and thanks for helping me articulate it in a better way.
But, you know, that to me is a tremendous worry, and I completely understand.
Can I just ask?
There seems to be a lot going on in the background around you there.
I don't know whether you're making your breakfast or riding an exercise bike.
There's a certain amount of noise.
Or maybe you've just got a very sensitive telephone.
That's what it is.
I truly apologize.
I don't know.
I give up.
But the conversation's been so fascinating that I haven't wanted to mention it up to now.
So let's proceed because I'm sure that my listener will be fascinated by what you say.
You've talked a lot about us entering the quantum world.
And a lot of people have talked to me.
They've been talking to me on radio for the last 15 years about how this is the dawning, not just of the age of Aquarius, but the dawning of the quantum revolution that will mean effectively, even if we don't quite understand what it is because most of us are not technologists, will mean that everything is smaller, lighter, faster, better.
Is that so?
Well, it's different.
Maybe that's a better way of putting this.
I'll give you a couple of examples.
In the world of quantum entanglement, and I'll just very briefly describe this idea.
In quantum entanglement, excuse me, sorry, as an example, one can have, let's say, a pair of photons, or even a single photon that's been cut in half.
It depends on how you do this.
But then you can separate those photons over a great distance, you know, many, many miles.
You can rotate one of those photons on its axis.
The other photon responds instantaneously.
No delay.
No speed-of-light delay, no separation in time.
Exactly.
As if these two objects, which are in two completely different spatial locations, operate as one.
This is a radical change in the world of physics.
Now, for those that are in the world of physics, this is what Einstein himself called "spooky action at a distance," and they're quite aware of this, of course, but for the average everyday person, it seems impossible.
Well, actually, this is not only possible, it's being deployed as a process, as a way to communicate.
Imagine now, if you will, an Internet of the not too distant future.
In fact, the first quantum Internet is going to be deployed next year, early next year as an experiment.
But imagine now, instead of when you send data across fiber optic cable from one location to the next, et cetera, it still takes time.
There's still delay, and there's still the inherent dynamics of how to get information from one place to the next.
In this case, information occurs simultaneously at all locations in a network that's set up this way.
Now, the implications for this are kind of astonishing, because in something as mundane, for instance, as trading, stock exchange, that kind of thing, or currency exchange like forex, et cetera, people spend a tremendous amount of money and effort to locate their server farms and their computational resources ever closer, at a more advantageous spot physically on the backbone, so that their trades can occur maybe a few milliseconds ahead of their competitors' trades, you know, that sort of thing.
Well, all of that disappears instantaneously when you step into this quantum domain, when you have a quantum internet where everything happens simultaneously in real time at the exact same time.
And this may be kind of a subtle thing to recognize how it would affect the average everyday person, but certainly from a financial systems currency valuation mechanism way of looking at things, this has a radical, and I do mean radical, change.
But that's only one small piece of a kind of a larger puzzle.
Imagine now that same set of rules.
We have been conditioned to believe that we live in a four-dimensional universe.
That's what our senses tell us.
That's what our brains are designed to recognize.
But in fact, if you communicate with folks in the cosmology world, right now I think the current count is up to 11 dimensions.
In other words, if you look at the universe and you plot out the expansion of the universe, the pace of acceleration thereof, everything that we can measure spectrographically, et cetera, it doesn't make sense.
About 95% of what should be there is actually not visible.
It's what's called dark matter or dark energy.
And yet one has to go through rather elaborate mathematical calculations to sort of compensate for that and say, ah, that's why we see what we see.
So right now, it turns out if we increase the number of dimension sets, then all of a sudden what we can observe begins to make a bit more sense.
And so this was sort of an open debate for quite some time.
But in a more sort of mundane, localized way, we can now prove that there's more than four dimensions because of things like this spooky physics or this sort of quantum entanglement mechanism.
Now, one can look at the quantum entanglement mechanism as a fantastically different way of looking at teleporting information on a planetary basis.
But in a very similar way, the same type of entanglement platform, you might say, also applies to things like quantum computing.
And like I was talking to you earlier about, we have quantum computing, which is really becoming real as we speak.
I mean, there's virtually all the major players in the industry are competing neck and neck to see who can deploy their system first.
I mean, D-Wave already has a commercial platform, but I can assure you that Intel, Google, IBM, all the usual suspects have their versions as well.
So people are very bent on being able to deploy this platform and have it out there as a resource as soon as possible.
So now if you imagine for a moment, sort of next-gen AI, which has access to quantum computing as part of its resource, and that's interconnected into a real-time instantaneous teleplatform, if you will, enabled by quantum entanglement being deployed to have information simultaneously anywhere in the system.
And then you apply that towards this sort of planetary scale resource management slash financial systems process.
It's such a radical change that it's a bit difficult to describe in sort of daily terms; there's no precedent.
There's nothing that we can compare to in previous times.
We really are stepping into a territory for which there is no comparative reference point.
And so to try to gauge the actual effects of what this is going to mean for the average everyday person, as I said, it's going to affect financial systems.
It's certainly going to affect our own biology.
It's going to affect virtually every aspect of business and sort of enterprise that one can imagine.
I would call this the quantum age.
But then there's one more piece of this puzzle, which I think is perhaps most compelling of all, at least in my personal opinion, and that is the intersection of quantum physics, biophysics, and consciousness.
And that is, for those that want to look in this direction, I tend to believe that things like telepathy and even precognition are not only possible, they actually have occurred throughout most of human history.
In fact, if one looks back into virtually all of human history that we currently are aware of, virtually every indigenous culture on this planet has had evidence of this.
It was usually confined to the chosen ones, the shamans, the monks, the special people, as it were, who trained for this or showed a proclivity for this type of capacity over their lifetime, and were devoted to applying it in some context, usually confined to a philosophical or spiritual kind of way of looking at these processes.
But now, now we're stepping into a territory where perhaps we can begin to magnify these characteristics and deploy them as a resource.
I mean, just imagine for a moment, if you will, speaking of Mark Zuckerberg once again, this is a person who over a year ago made a rather famous statement.
He said that artificial telepathy would be the next big thing in the world's business.
And I think he's right.
If one can imagine now for a moment, even today, just sort of rather crudely in a way, but we already have mind-machine interfaces becoming very well established.
One can acquire various types of devices where you can link your thoughts directly to a computer or to an interface.
And of course, this can go online.
We have already seen many examples of direct mind-to-mind linkage on the internet.
We've seen many examples of mind linkage to various types of mechanical systems and so on.
Well, imagine now for a moment if we step beyond that point, if we could biologically enhance that telepathic capacity.
How do you think that would affect the world as we know it today?
This is how I see it.
Most of what the human condition consists of is people trying to sell something to the people around them.
I don't care whether it's personal relationships or marketing a product or selling a political ideology or just whatever it happens to be.
Most of what we encounter are people surrounding us who are trying to project an image or trying to project an expression of some sort, you know, through body language, inflection of voice, and how they present themselves, et cetera, to convince us to believe whatever it is that they're expressing to us.
If we could see past that veneer, if we could see the true intent of what surrounding people are providing to us, that would have a radical, and I mean a radical change on how we behave.
Well, it sounds to me, I'm just considering the ramifications of this now, that if everybody knows everything and we're heading that way, a lot of things are going to become untenable from the very simple point of view of the guy who's trying to sell you a car.
I mean, he's not going to tell you everything about that car because if it's a secondhand car, a used car, it may have a small issue that won't affect you really, but he's going to downplay that or not tell you.
But if you know that anyway, then you're probably not going to buy that car.
And that means the guy who's trying to sell you that car maybe doesn't get paid next week.
I mean, that's a little silly example.
I'm with you there.
And this even gets to somewhat more tenuous areas, like guy meets a girl, and he wants to impress the girl, and he wants to convince the girl to go out with him on a date or something.
And so he has to kind of sell himself to convince the girl, et cetera, et cetera.
So, I mean, this plays into all levels of human endeavor.
However, however: a few minutes ago, we were discussing our future as a civilization, our future as a species type, our sort of co-evolutionary symbiosis with intelligence other than our own, et cetera, et cetera.
Maybe this is part of that path.
In other words, we were discussing a little bit about, well, can we come up with some kind of regulatory mechanism or some kind of policy directory, some kind of a way of establishing what should be deployed or not deployed, to what extent should various technologies be allowed to proliferate in certain directions, that sort of thing.
It's my opinion, just my opinion, that if we saw intent directly, then the vast majority of the conflicts and the problems that we see and the really bizarre and negative behaviors that we see with various extremist groups and religious beliefs that cause really horrible behaviors,
et cetera, a lot of this would disappear because we would suddenly no longer be driven by interpretations of what we think are things that we're observing or our belief systems.
We would see it directly.
We would know what the intent is and therefore not be driven to the kinds of negative behaviors that we display now, much of which is because of a misinterpretation of intent to begin with.
And if you think of the current dynamic between Donald Trump and his administration and North Korea, then if Donald Trump knew that Kim Jong-un perhaps didn't have the nuclear capability that he said that he's got and it didn't work quite as well, then he wouldn't worry so much.
But equally, Kim Jong-un might also understand simultaneously that Donald Trump may talk about launching ultimate force if North Korea completely gets out of hand.
If those two things are simultaneously appreciated by both sides, you might actually get an easier compromise.
There you go.
And so that's a somewhat unusual metaphor, a way of looking at the concept, but I would certainly agree that as a model, yes, this could be played out at all levels, whether it's individuals, societal groups, entire nation states, et cetera.
In other words, I think that on other worlds, once again, this telepathic boundary was actually crossed.
It was part of their evolutionary process.
And once they crossed that telepathic boundary, then suddenly they had a complete shift or a completely different way of managing the world's affairs.
And that may be what's possibly relevant here.
Now, I will suggest to you that, again, just stepping into sort of territory where physics and biophysics and quantum physics sort of mixed together in biological systems, there's an entirely new science called quantum biology, which has become kind of a thing.
In fact, we're discovering that there's various aspects of quantum entanglement that occur in both plants and animals.
I mean, it's really quite a process.
It's no longer a woo-woo sort of thing.
It's becoming de facto real science.
And so couldn't one make an argument that says, if we understood better how this sort of telepathic capacity might really work between living systems, certainly with human beings: is there a genetic marker for this?
Maybe.
You know, I just opened it as a question mark.
And if so, if we're stepping into the territory of genetic modification, like I was saying earlier, using the next generation of computational resources to come up with ever more robust ways of creating new genomic content, et cetera, and we begin to modify ourselves as a living thing, would part of that modification process be the enhancement of telepathic capacity?
And if so, if that became the case, would this be evenly distributed to everyone?
That's an open question mark.
But in other words, once we step into the territory, I'm just suggesting as a theory, that this telepathic capacity could be enhanced through genetic modification, and this became an accepted norm, at least for some, that may be a necessary step to get us around the corner of an otherwise seemingly impossible dilemma, as we were discussing earlier about this idea of population density of the planet and sustainable resources and political processes going amok, et cetera, et cetera.
Maybe this is the requirement.
I'm not saying I know this is a de facto answer.
I'm just saying as a suggestion it could be.
And if one wants to use that as a sort of a comparative reference point, then that to me would signify the ultimate example of stepping into the quantum age.
Wow.
I've absolutely loved this conversation.
There is so much more we can talk about.
I knew there would be.
Here's another one of those crazy, dumb ballpark questions, but maybe it's a good one to end this conversation on anyway, Charles.
Is the future a good place?
Is it going to be?
I think it could be.
I certainly, if you look backwards, I mean, I have a lot of friends, by the way, who opine endlessly about the wonderful days when things were organic and people lived out on the farm and whatever.
And I'm thinking, well, wait a minute, not so fast.
I think it's beautiful to look at in a certain kind of way.
But if you look at medical science as an example, I would not want to be stuck in an earlier time when medical science was very questionable at best.
I would suggest, I mean, to me, on a personal basis, I'm just incredibly curious.
I've been curious all my life.
I'm kind of one of those ADHD kids that's interested in everything at the same time all the time.
So I just want to see what happens.
And I sort of sense that we really are at a kind of a cusp, a sort of a threshold, like an evolutionary event horizon of sorts.
So I want to just be here long enough just to see what happens.
Now, whether it's, quote-unquote, better or worse, I think that's a value judgment that many people have somewhat different ways of determining.
I think it could be better.
I think it's much better, actually.
But it will be very different.
And just for that sake alone, I just want to see what the outcome is.
And at the very least, we need to start having these conversations that I hear so few people having, because this is the future.
If we want any hope of shaping the way the future looks, rather than, as we've said at various points in this conversation, the technology and the way things are going shaping us, we've got to get with the program and get involved now, it seems to me.
I would agree with that, 100%.
Charles Ostman, thank you very much.
If people want to read about you, and we fought the technology and won with this conversation, haven't we, Charles?
If people want to read all about you, what's your website?
Tell me your website.
Okay, my website is...
This is a monkey fight I was getting to making.
Okay, so the website is historianofthefuture.com or historianofthefuture.org.
Either one works.
It's so funny, isn't it?
Because here we are.
I'm in London.
You're in California.
And I'd looked forward to having a nice, crisp, digital conversation with you.
And the technology beat the both of us.
And we're both technological people, which just goes to show that we don't quite know everything yet.
It's the perfect punctuation point for the program.
It sits perfectly.
Absolutely.
Well, I hope we can have another conversation, and I'm looking forward to it already, Charles.
Thank you.
Absolutely.
Have a great day.
I'm really pleased that after all these years, I've had him on my show.
Please let me know what you thought about him, and we'll certainly try and get him back on the show.
I will put a link to Charles and his work on my website, theunexplained.tv.
Please keep your emails coming.
Keep your donations coming if you can.
Go to my website, theunexplained.tv, and you can contact me.
And while you're passing through, maybe leave a donation for the show, too.
Thank you to Adam at Creative Hotspot for all of his hard work on this show.
And above all, thank you to you for listening to it.
More great guests on the way here on The Unexplained.
So until we meet next here on this show, my name is Howard Hughes.
I am in London.
This has been The Unexplained.
And please stay safe.
Please stay calm.
And above all, please stay in touch.
Thank you very much.
Take care.