AI Mind Reading: The Next Step In Total Surveillance
That brings us to something that Lance found that I thought was very interesting.
And that is a brain-interface company that is called, hang on a second, I'll get it right here.
BrainIT.
BrainIT is their thing.
And they're not the only company that's doing this.
There's a lot of different companies that are doing this.
And let's show people what this really looks like.
Scroll down and show, zoom in on those pictures.
Now, there's pairs of pictures, and you'll see an image that the person is looking at.
It says scene image.
Right next to it is the reconstructed image.
And look at that.
There's a giraffe.
And then right next to it is a giraffe.
But the giraffe is standing in exactly the same position and same way and looked at from the same angle, looking kind of back over its shoulder.
To be clear, the scene image is what the human is looking at.
And then there's sensors connected to the brain that's creating the reconstructed image.
The computer hasn't seen this scene image.
Only the human sees this, and this is entirely constructed from a brain scan.
That's right.
So they can sense what you are looking at and completely reconstruct it.
And look at how identical these images are.
Now, you've got a stop sign, and it got a stop sign as well as the word stop.
The only thing that's missing there is the four-way thing underneath it.
It didn't quite reconstruct that exactly.
And then when you look at the pieces of pizza, the reconstruction is a little bit more orderly in the way that it put the pizza together.
But even when it gets some of the details wrong, it still has the basic orientation there.
Scroll it up a little bit, the snowboarder that is there.
Take a look at the snowboarder.
So here, the basic orientation is right.
Even though the snowboarder has one leg up, the arms are still extended and still in basically the same orientation.
It's going down the snow with a shadow being cast.
But it is truly amazing.
Yeah, show the baseball one.
That's another good one that's there.
So the baseball thing, you've got three different people, and they're all basically in the same orientation.
The one again on the left is the actual picture that the human is looking at.
The one on the right is the reconstruction: the computer monitors his brain and reconstructs that image on the right from the brain activity.
And so you've got a catcher who is squatting and he's got one arm extended out, and that is captured again.
And then the umpire behind him, who is in the same crouching position, even though the colors change a little bit.
It still has that there.
And then moving up to the room, the motel room, look at that.
It even has the same color bedspread there.
And the one above it, where you have the motorcycle still in exactly the same angle, and it figured out there's a person on a racing motorcycle, even though it got the colors slightly different on that.
Truly, it's amazing.
Interesting to me because it's little details that it gets wrong that if you were to remember this image, you would probably get a lot of these same details wrong, like exactly the color scheme of their clothes.
But it still gets the general color scheme across all three of them.
Yeah, the three people in the skiing one.
And again, the jet, the military jet, it gets a little bit of the details on the bottom that are different, but it basically has it all there.
So it is pretty much getting the gist of it, just as Lance said.
You would remember that when you come back.
Now, what is interesting about this, I think, is the fact that it's not just one company that's doing this.
There are at least 11, let's say a dozen companies that are out there.
I bet you, we didn't look this up, but I bet you every single one of them has got grants from DARPA or some federal agency, most likely DARPA, in order to do this kind of stuff.
What is the use case for something like this?
And how did they put it together?
Well, this particular company is bragging about how superior their method is.
They use fMRI, the MRI scanner that you have.
They put you in the machine and scan your brain and things like that.
I had several of those done.
This is functional MRI.
And what it does, instead of looking at the structure of the brain and seeing whether there are physical alterations to the brain after a stroke or something like that, is look at changes in the brain that are happening dynamically over time.
And so that's what the functional MRI is about.
Rather than looking at the physiology or the structure of the brain, it's actually looking at the dynamic brain activity.
And so to train these models, one of the things that this company is bragging about is that they spend about an hour training it, and their competitors might spend 40 hours training it.
And they get far superior results.
It truly is amazing when you look at how long they spend training it and how much better their recognition is, you know, being able to sense what you are seeing and thinking about and basically reading your mind.
And so it is the Brain Interaction Transformer.
They call themselves BIT.
Now what they do, what is the training?
Well, it turns out that everybody has these localized patch level image features.
And so they call them clusters.
And so they're looking at brain voxel clusters.
And they say, all humans have this, but these clusters will be located in different places on different subjects.
Same thing, but it'll be slightly moved around.
You know, when you have a stroke, they call it brain plasticity.
And so when you have a stroke, part of your brain dies.
And if you get the functionality back, it's because another part of your brain has taken up that activity.
They said, so some very, very young children, maybe in infancy, might have a stroke that would affect, for example, their speech.
And what they found is that when those young children have a stroke on the side of the brain where speech would normally reside, the other side of their brain picks it up as they learn to speak.
And so that's what's called brain plasticity.
In other words, it can adapt and train that other side of the brain to take over those functions.
So that's what they're basically looking at here with these voxel clusters.
They know that certain things are going to be fired.
They just don't know exactly where that's going to be in a person's brain.
So they spend an hour mapping those things out.
And then they get very, very accurate results.
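Just as a hedged sketch of that calibration idea, not anything from the paper itself: if every subject has the same functional clusters but in slightly different places, a short mapping session could assign each voxel to whichever shared, canonical response profile its activity correlates with best. Every name, shape, and the correlation rule below are our own illustration, not BrainIT's method.

```python
import numpy as np

def map_voxel_clusters(voxel_responses, canonical_profiles):
    """Assign each voxel to the canonical cluster it correlates with best.

    voxel_responses    : (n_voxels, n_timepoints) responses recorded during a
                         short calibration session (hypothetical shapes).
    canonical_profiles : (n_clusters, n_timepoints) idealized response profiles
                         assumed to be shared across subjects (also hypothetical).
    Returns an (n_voxels,) array of cluster labels.
    """
    def zscore(x):
        # z-score along time so a dot product becomes a Pearson correlation
        return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

    v = zscore(voxel_responses)
    c = zscore(canonical_profiles)
    corr = v @ c.T / v.shape[1]   # (n_voxels, n_clusters) correlation matrix
    return corr.argmax(axis=1)    # best-matching cluster for each voxel

# Toy calibration: 6 voxels, 2 canonical clusters, 8 timepoints
rng = np.random.default_rng(0)
profiles = rng.normal(size=(2, 8))
# First 3 voxels follow profile 0, last 3 follow profile 1, plus a little noise
voxels = np.vstack([profiles[0] + 0.1 * rng.normal(size=(3, 8)),
                    profiles[1] + 0.1 * rng.normal(size=(3, 8))])
labels = map_voxel_clusters(voxels, profiles)
print(labels)
```

The point of the sketch is only that the clusters themselves are assumed universal; what the hour of calibration buys you is the subject-specific map of where they sit.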
And what they do is they split it into two different aspects.
One of them is the semantics.
And I think what that does is kind of give them a context.
So when you look at how, oh, you've got two people standing and they're kind of standing in this particular orientation, it picks that up.
And then the other one is more about the details that are there.
And then they run these two different paths together.
So first they have programs that are looking at the voxel clusters, creating a kind of semantic context.
The other one is creating a context for the features.
And then they take the output of those two things and put them into something else that combines and sums those things together to give them that kind of image.
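A minimal sketch of that two-path idea, under our own simplifying assumption (not the paper's) that each path is just a linear map over the voxel-cluster features: one path produces a coarse "semantic" code for what is in the scene and how it is arranged, the other a "detail" code for local appearance, and a final stage fuses the two. All the names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

n_voxel_features = 64   # hypothetical size of the voxel-cluster feature vector
d_semantic = 16         # coarse "objects and layout" code
d_detail = 32           # fine "textures, edges, colors" code
d_image = 48            # size of the fused code handed to an image generator

# Two independent pathways over the same brain features (toy random weights;
# in a real system these would be learned during that hour of training)
W_semantic = rng.normal(size=(d_semantic, n_voxel_features))
W_detail = rng.normal(size=(d_detail, n_voxel_features))

# Fusion stage: combine both codes into one representation
W_fuse = rng.normal(size=(d_image, d_semantic + d_detail))

def decode(voxel_features):
    semantic_code = W_semantic @ voxel_features   # context: objects, layout
    detail_code = W_detail @ voxel_features       # local appearance features
    fused = W_fuse @ np.concatenate([semantic_code, detail_code])
    return fused                                  # would condition an image model

brain_activity = rng.normal(size=n_voxel_features)
code = decode(brain_activity)
print(code.shape)
```

The design choice the segment describes — getting the gist right even when the details drift — falls out naturally from this split: the semantic path carries the scene layout, so errors in the detail path change colors or textures without moving the objects around.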
It's pretty interesting in terms of technology that is there.
But I think it is absolutely abhorrent that they're doing this.
I can't think of any reason for them to do something like this.
Now they'll come up with some kind of a fake justification, just like they're talking about with the creating babies with a hatchery.
Oh, well, we'll do it to save people from some kind of genetic disease.
And they're leaning into that excuse, leaning into that narrative by calling their company Preventive, right?
But these are the kinds of things, you know, when you look at this, actually, you know, Lance, pull up the one that says it's titled Brain Interaction Transformer.
And when you look at that chart, you'll see that in their chart, when they're talking about the cross-transformer module, they've got that listed there twice.
And guess what?
They misspelled Transformer.
I'm being a little bit of a grammar Nazi here, but I got to just say that, you know, we're talking about things like this.
The little details matter.
And I wonder what happens when you switch some of the stuff and you're reconstructing things, and it's a critical mission.
I don't know.
To be honest, this sounds a lot more like a Decepticon ploy than the Transformers to me, but what do I know?
Yeah, that sounds pretty crazy to me.
Look at one last one here, and that is comparing their images to these other models that are out there.
Their company is called BrainIT, and they compare it to some other models: MindTuner, MindEye2, NeuroVLA.
And so look at this.
They're the best mind reader on the market right now.
Yeah, that's absolutely right.
The interesting one, I think, is the last one, the NeuroVLA, because it always gets the object correct, but it gets it in a very different context.
Yeah, that's right.
Yeah, so that first row there, you're seeing a bowl of some white stuff.
Maybe it's oatmeal or something, and you're seeing a banana right next to it.
And then when you look at the NeuroVLA, they've got a bowl, and then they've got a banana, but it's not at all in the same orientation.
And BrainIT was able to do that.
And you see that repeated over and over again.
They kind of get some of it, but they don't get all of it.
And, you know, it's kind of interesting.
What it reminded me of was this.
Dr. Venkman, Ghostbusters.
Good guess, but wrong.
We heard that from Bill Murray in the mind-reading scene that opened up Ghostbusters.
I wonder if they shocked the people who created these models.
Maybe that's how they got it right.
Tell me what you think it is.
Is it a star?
It is a star.
That's great.
And yet you can see from behind him that it wasn't.
Think hard.
Circle.
Close.
Square.
Definitely wrong.
Okay.
All right.
Ready?
What is it?
Figure eight.
Incredible.
That's five for five.
You can't see these, can you?
No, no.
You're not cheating me, are you?
That's not what I want.
I swear they're just coming to me.
Okay.
Nervous?
Yes.
I don't like this.
You only have 75 more to go.
Okay.
What's this one?
Just a couple of wavy lines.
Sorry.
You got it right.
Um, we're here, um, we're here, we're, I, it's not, I, ah, ah, ah!
I'm getting a little tired of this.
You volunteered, didn't you?
We're paying you, aren't we?
Yeah, but I didn't know you were going to be giving me electric shocks.
What are you trying to prove here anyway?
I'm studying the effect of negative reinforcement on ESP ability.
The effect?
I'll tell you what the effect is.
It's pissing me off.
Well, then maybe my theory is correct.
You can keep the five bucks. I've had it!
I will, mister!
Keep the five bucks.
I wonder why they pay these people to go through an hour of MRI.
It's the kind of resentment that your ability is going to provoke in some people.
Yeah, so yeah, that's kind of interesting.
But now they're doing it for real.
Okay, they're going to use AI to read people's minds.
And again, when they list out a table and they compare themselves percentage-wise to these other people, you see that there are 11 of these companies that are out there doing this stuff.
And who is paying them?
I bet it is some evil organization like The Common Man.
They created Common Core to dumb down our children.
They created Common Past to track and control us.
Their Commons project to make sure the Commoners own nothing and the Communist Future.
They see the common man as simple, unsophisticated, ordinary.
But each of us has worth and dignity created in the image of God.
That is what we have in common.
That is what they want to take away.
Their most powerful weapons are isolation, deception, intimidation.
They desire to know everything about us while they hide everything from us.
It's time to turn that around and expose what they want to hide.
Please share the information and links you'll find at TheDavidKnightShow.com.
Thank you for listening.
Thank you for sharing.
If you can't support us financially, please keep us in your prayers.