WTW45: The Brophy Study Is Wrong In Nearly Every Possible Way
What's so scary about the woke mob?
How often you just don't see them coming.
Anywhere you see diversity, equity, and inclusion, you see Marxism and you see woke principles being pushed.
Wokeness is a virus more dangerous than any pandemic hands down.
The woke monster is here and it's coming for everything.
Instead of go-go boots, the seductress green M&M will now wear sneakers.
Hello and welcome to Where There's Woke.
This is episode 45.
I'm your host, Thomas Smith.
That over there is Lydia Smith.
How you doing?
Hello, I'm doing pretty well and somehow surviving the craziness of all of this.
Yeah, we are two members of the three-member panel of The Master's Thesis.
What do you call it?
Yeah, panel.
Master's panel.
Panel!
It's been a while, okay?
I don't think you actually have your master's.
Have you been making it up all this time?
Yeah, I have to prove it.
It's just called the three panel panel.
Panel threes.
That's why we had to call in an expert.
Yeah.
So we're going to be speaking with Dr. Janessa Seymour, who is an expert in exactly what we need for this.
And oh my God, this study is so fucking bad.
It is so bad.
It's crazy.
And I just can't get enough.
This drove us insane.
And I can't get enough of proving how bad at science Jordan Peterson is, because he not only, again, was in charge of this, he also boasts about it.
There's so much, though, that we are saving extra for patrons only.
Believe it or not, this was me doing my absolute best to limit how much we're talking about this fucking Master's thesis.
But it's so good.
There's so much.
When something is so terrible in a complicated way, it takes so much time.
We're putting some of it on patreon.com slash where there's woke.
That's for those who are extra keen on scientific dunks on Jordan Peterson, so make sure to check that out if you'd like to.
But in the meantime, let's get on over to our expert and we'll hear more about why this thing sucks.
I also did want to note, so when we first recorded with Janessa, we didn't really know what we were going to be doing with it, you know?
And so I thought we were just gonna get some background information.
Like Janessa didn't even know she was going to be like on the podcast, but she was so good.
We were like, well, we have to use this.
But I only note it because I ended up talking a whole lot.
Cause what I wanted to make sure to do was, like, express all the thoughts I'd been having on this study as a non-expert and just make sure I wasn't saying anything dumb.
Right.
And so again, it was going to be like an on background interview.
And so I, I guess I feel a little self-conscious that like, I don't normally get an expert on and then just do like a bunch of talking in front of them.
But the reason I was doing it is because, again, we were running stuff by her as a background interview.
It ended up being so good.
She was excellent.
So we decided to use it, uh, just to comment on that.
If it seems like I'm talking too much, that is why. But don't worry, after I stop talking, Janessa really takes us through some great stuff.
So let's get on over to that.
Janessa, if you don't mind introducing yourself and telling us your qualifications.
Yeah, so it's Dr. Janessa Seymour, I guess, former neuroscientist.
What happened?
I was working in academia for a while.
There's a very long version or there's the short version that's just like, I had tenure track options in front of me that were going to require some sacrifices.
I had non-tenure track options that were super fun, but those would not have job stability, and eventually I was just like, this is not the life that I would like, so now I work at a law office.
Yeah.
Yeah, you went to law school, right?
Yeah, yeah, I am taking the bar in July, so... Oh shit, that's cool.
Yeah.
Well, it's a whole separate conversation, but the point is...
Relevant to our discussions today.
Right.
Yeah, I have a PhD in neuroscience, bachelor's in psychology, cognitive neuroscience, so really it's related to this kind of stuff in terms of I've taught statistics for behavioral sciences for a while there, which is basically, yeah, if you are a psychologist or somebody in the behavioral sciences, how do you pick your statistics?
Which ones are appropriate given the type of study you're doing?
So kind of exactly this thing.
I have some problems.
Yeah, I solicited some help from the Facebook group and you were kind enough to message me.
I'm so glad you did.
My first thing I want to ask you is can you please justify to the audience that this is insane and it's not crazy for me to be obsessed with what is happening here?
Cause like, I thought I was, I'm reading this.
Here's what happened.
I started really digging into this master's thesis.
I decided to, you know, take a bunch of, like, fucking online courses in stats and factor analysis because I knew something made no sense about this.
It made absolutely no sense.
I came to some conclusions.
I consulted you about it.
I was able to get a lot of stuff right.
It just took me like 20 hours, whereas it took you like three minutes.
And then you dug even deeper into it.
And I hope that you can forgive me for putting this nonsense into your brain because it is truly impossible to stop thinking about because it's either just abject lies or like I don't know.
So I am very curious to hear what you made of this master's thesis from 2015 University of Toronto that we have to talk about because it's the basis of all this misinformation and it's something that Jordan Peterson is leaning on in public Nazi appearances, Nazi YouTube channel appearances to make just wide ranging claims about us essentially.
And this study is just terrible.
It's absolutely terrible.
But here's the other thing, and then I'll let you go.
I'll let you go off.
I started getting so critical of this study that I was like, unlike, I think, Jordan Peterson, I have a bit of humility in thinking like, okay, if I'm thinking something is this wrong, I probably am wrong.
Like if I'm coming in here, I don't have a degree in this.
I don't know what I'm talking about.
You know, I'm coming and I'm looking at this thing.
And I, just a guy, am like, this is so... It's not just like I have some critiques, like this seems so terrible.
I was like, I have to consult an expert because I'm not going to come out that strong as a non-expert.
So am I missing something or is this just nuts?
No, it's bad, man.
And I, God, I felt the same way.
Like at first I was like, okay, Between you and me, it's pretty bad.
Because I feel terrible just publicly dunking on some random master's thesis.
That's exactly what I told Thomas, too.
I was like, man, is anyone pulling up my master's thesis and ripping it apart?
Yeah, to which I said, you're not using your master's thesis for Nazi propaganda, hun, so I think you're fine.
This is the problem, is as soon as I was like, okay, it's fair game once it's being used for nefarious purposes, and it's being used in a way that is just not relevant to what it even says.
And yeah, definitely don't blame yourself, because I mean, this is my bread and butter, and I have spent days looking at this.
It's impossible!
Yes!
You can't look away.
No, for real.
I think I sent you the document where I had to print it and handwrite numbers on it to be like, this is question number one.
This is question number two.
Why is this one question four and five, but on this other sheet it's four A and four B?
They have nothing to do with each other.
Why are they parts?
I don't understand.
Or option C and D, but there's no option A or B. Yeah.
And that's what makes the count get off.
And so by the end of it, you've lost count.
And I was losing my mind.
And I don't think that's a coincidence.
I think that kind of sloppiness is appearing throughout.
And it's the kind of thing that shouldn't have made it out the door.
This looks like the first draft, and I don't know why it's on the internet.
Well, and this is where maybe Lydia can help us out because Lydia has a master's in psychology and I keep asking her.
Another reason why I think this is fair comment is, like, this really reflects poorly on Jordan Peterson's comprehension or teaching skills or whatever, because what would have happened if you submitted any of this garbage at any point in the process of getting a master's thesis?
Like, wouldn't someone have told you this is garbage?
I would have had so many, the track, what is that?
Oh, track changes.
Track changes would have been lit up on my paper from my advisor.
And he's not famous.
He's a really, you know, great professor.
And I learned a lot from him.
But he also had expectations.
You know, like if you're about to get a degree, it needs to mean something.
If you're about to present a written publication, you know, a thesis of your time here, he was like, it should reflect what you've actually learned.
It should reflect truth.
It should uphold, you know, those expectations as a researcher, as a scientist in psychology.
And especially because a lot of the folks in my program were looking at applying to Ph.D. programs.
So there was this additional thing of like, you know, like you really need to get your feet wet here and get comfortable with doing this.
And it's going to help make you a stronger candidate for Ph.D. programs.
And I'm curious if there's some element here because this was like a master's to PhD ramp that she was in.
Like, I don't know that it's entirely Jordan Peterson's fault, or if it's a little, I don't know, lazy of the program in general, because if you were admitted to the master's program, as long as your performance was adequate, you know, with your grades and everything, then you continued on to the PhD program.
You don't have to apply.
So I wonder if it was like less rigorous to an extent as a result.
But anyway, my advisor would not have let me get away with that at any point in the process, even first draft probably.
I still have a theory that she didn't get her master's because this thing sucks so bad.
I don't really think that's true, but like maybe?
I don't know.
If it was published on the school website, does that mean she successfully defended it or whatever?
I don't know for sure.
I mean, I imagine maybe every school is different, but I think my school, had I not successfully defended my thesis, I don't think it would have made its way into publication.
Like your advisor has to sign off on it.
And there's more than just Jordan Peterson as an advisor.
That's another thing that made me doubt myself and made me want to make sure we consulted another expert because like, how the fuck?
Okay.
Here's what I was prepared to come in here and do.
Now, in the first episode we talked about this, hun, I came in pretty strong that the PC authoritarians were conservatives and the PC egalitarians were liberal.
Now that I have gone into massive depth on this pile of crap, that's not exactly right, so I do want to offer a bit of a correction, but it's also just stupider than that.
Like, it's worse.
Yeah.
Because here's my interpretation, and fortunately we have Janessa here too.
You can be my master's thesis advisor, Janessa.
See if I pass muster here.
The way she talked about it in the video, the way both of them talked about it, is the PC Egalitarians and the PC Authoritarians were like two ends of a spectrum.
She literally said like, well, on one side, we have the PC Egalitarians and what they do is they blah, blah, blah.
They're like, you know, a women's studies professor and then we have the PC Authoritarians and they do all this stuff and then the PC Egalitarians justify what they're doing.
And one thing Lydia kept telling me a number of times, to your credit, you're 100% right, hun, was that that doesn't make sense as a way to talk about this study.
Like, you were saying that, like, you could classify the questions that way, but it would be weird to classify the people that way.
And the problem was I kept saying, well, you know, that doesn't exactly seem fair to me just as a lay person, because if you did some sort of polling about politics and you found two groups of people and were like, yeah, over here there's the Republicans and over here there's the Democrats, it seemed to me that that would be fine.
And I think, correct me if I'm wrong, hun, but I think because it's been, you know, over a decade, I feel like you didn't have the words to say why I was wrong, but I was definitely wrong.
Like, but I was just trying to understand it.
Yeah.
Yeah.
I think we had like a communication challenge because it has been a long time since I've lived and breathed this kind of statistics, you know, not just sort of your everyday sort of statistics.
And I was like, I can't figure out how to explain to you what I'm trying to say.
And so thankfully we have Janessa here.
Well, what you actually told me first, because there's layers and layers and weeks to this fucking story, because again, I can't stop thinking about this.
First, you're like, can you just watch some videos on factor analysis?
And I was like, yes.
Was going to do that anyway.
It's something I want to know anyway.
Like, I like being able to read studies better.
I want to be able to read studies better.
So then I went on a bit of a deep dive of learning factor analysis and watching all the courses I could.
And so what I was going to come and tell you fine folks is, yeah, no, it doesn't at all make sense to classify those groups and talk about those groups of people that way.
And I would make a comparison.
It would be like saying, okay, over here you have the Extroverts.
And they are so-and-so, and they have these qualities.
And then over here, you've got the night people.
And those people over there that stay up late at night, they have these qualities.
To which you might think, like, well, couldn't you be both an extrovert and a night person?
Or, by the way, neither an extrovert nor a night person.
And I think, based on my understanding of factor analysis, that should have been more how she was talking about it, because these are two separate factors.
One thing that it was hard for me to understand as a layperson is what you do with factor analysis is you're finding, and you can put this into better words, but you're finding the hidden underlying variables that are explaining your data in a way that kind of reduces the number of variables.
So if you have like 10 questions and you're able to find kind of one hidden variable that predicts it better, that'd be super cool.
And maybe that variable is politics or something.
And so what I didn't realize was what you would have is you would have someone who's high on your factor or maybe someone who was low on your factor, but you wouldn't have two different factors be like the two different groups of people.
Is that right?
Yeah.
No, that's exactly right.
Like if I, politics, for example, would be a great one.
If I was giving questions and I was like, oh, this is weird.
I'm getting a weird distribution.
I wonder what this is.
And I ran a factor analysis, and all of a sudden I get this bimodal distribution, and it looks like, oh geez, people answered a bunch of abortion and gun control questions this way or that way, and I'm like, oh god, it's politics, right?
I'm getting the Democrats are over here and the Republicans are over there, so the factor would be politics.
And you could either think of it as a spectrum of belief, or sometimes things are so different that you could probably call it categorically Democrats over here, Republicans over there.
But yeah, it's existing on a spectrum.
It might be a nice normal distribution where the tails are rare, the center is the most common.
It could be a bimodal distribution like politics, where people tend to cluster: everybody's over here, or everybody's over there, and there's a couple of weirdos.
That's unfortunately how it tends to be.
But yeah, the factor wouldn't be Democrats and Republicans.
Democrats as a factor, Republicans as a factor.
The factor would be political parties, and your score along it would be your partisan ranking.
Because it wouldn't make sense to have two factors and then you could be high in democratism and high in republicanism at the same time.
That wouldn't make any sense.
And that was something that, again, as a layperson, I just truly wasn't grasping.
And despite Lydia's attempts, and she was absolutely right.
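For readers who want to see the point the two of them land on here made concrete, below is a minimal, purely illustrative sketch in Python using numpy and scikit-learn. It is simulated data, not anything from the thesis, and every name in it is made up: one latent "politics" factor drives every answer, and each person ends up with a single score along that factor's spectrum, rather than being sorted into two rival factors.

```python
# Toy illustration (NOT the thesis's analysis): simulate answers driven by
# ONE latent "politics" factor, then recover it with factor analysis.
# Every variable name here is hypothetical, invented for this example.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_items = 1000, 10

# Bimodal latent trait: two lumps of people, "over here" and "over there".
latent = np.concatenate([rng.normal(-2.0, 0.5, n_people // 2),
                         rng.normal(+2.0, 0.5, n_people // 2)])

# Every question loads on the SAME single factor, plus individual noise.
loadings = rng.uniform(0.5, 1.0, n_items)
answers = np.outer(latent, loadings) + rng.normal(0.0, 1.0, (n_people, n_items))

fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(answers)[:, 0]

# One factor, one score per person: a spectrum, not two competing factors.
print("scores per person:", scores.shape)
print("people on each side of the midpoint:",
      int((scores > 0).sum()), "vs", int((scores <= 0).sum()))
```

The point of the toy: the factor itself is "politics," each person's score lands somewhere along it, and "Democrat" and "Republican" are labels a human attaches to the two lumps afterwards, not two separate factors you could be high on simultaneously.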
And so there's multiple levels of why this thing sucks.
There's actually tens of levels of why this thing sucks.
And so the challenge here is figuring out where to start.
But I want maybe a place to begin here before we get a real detailed glimpse into the just absolute horrors of stats that they're doing here.
The war crimes of statistics they're committing.
Maybe we should talk about what the three of us might do if we were trying to do this study.
I think what people inevitably do when they hear people talking about a study like this is they are probably thinking in their mind what it should be based on their making sense of it.
Like, I just assume that the way she would be doing it is doing a questionnaire that's going to test on, you know, politics and it's going to test on political correctness.
And then you're kind of, because there would be an interesting study there.
I would think, correct me if I'm wrong, I would think the way to go about this, if your theory was political correctness is a thing and I want to measure it.
Okay, sure.
Let's try to measure it.
So, I would think an important component to that would be a political questionnaire, especially because Christine Brophy believes that PC equals left.
She believes, and Jordan Peterson believes, going into this, you can see in her writing, she's like, PC is a product of the left, essentially.
And so, it seems to me, not a scientist, but it seems to me, the first thing you'd want to do is, okay, well, I better be able to distinguish between: are people PC, or are they just left?
Like, if that's the same thing, then what am I even additionally testing for?
So, I would think you'd want, like, maybe if you have that two-factor analysis thing going, you form your questions in such a way that you hope you're getting two factors, one of which would be, like, political views, as we already kind of covered, and the other might be some sort of PC scale, but it would have to be kind of agnostic as to content, I would think.
You'd need to test, like, how sensitive are you to people using words you don't like?
And you would want to form a battery of questions that was vague, you know, or that was neutral.
Or, if you had politically loaded questions, you'd want to make sure you had enough either way to maybe, like, still group people differently depending on their politics.
Does that sound right?
Or what would you add to that?
Yeah, no, I totally agree.
I was thinking that the whole time; like, I was kind of disappointed about the study that could have been, right?
There is, there are so many interesting questions hiding inside of this.
And it's worth asking the question of, like, is it helpful that, you know, we police our language in some ways and not others?
And, you know, I think you've had people on here to talk about that before, and, like, there is research on that.
What ways do people prefer to do it?
Who is more likely to police their language in what ways?
How do they feel about their own language, other people's language?
Do I not like it when you say it, but I'm okay with me saying it?
Those are really interesting questions.
And instead, we got this.
I honestly think I wouldn't even do factor analysis.
I'd probably just start with coming up with a list of questions that, like you're saying, I would try to come up with some that are, you know... I'd have to think carefully about what would be nice and neutral and not activate anybody's political beliefs.
Honestly, I mean, I'd probably try to sit down with some people of different beliefs first to do a survey type of deal.
Let me ask you, what do you think?
What's a term you don't like to hear?
What's a term that you think other people don't like to hear?
In fairness, I think she did that; it says she did that with, like, some other people in the school.
Like, we formed this battery of questions based on our perceptions of the media and what they think political correctness is, right?
It was something like that.
Yeah.
Yeah.
And then consulted with staff and other grad students for their thoughts and opinions on that, and added to it to get to the truth.
But boy, did they fail.
Yeah, it clearly does not pick up, or I guess in a couple of places it does pick that up, right?
Actually, I would argue in some places it weirdly picks it up a lot.
I learned more slurs for Irish people in this paper.
Things I've never heard.
Here's the struggle in organizing this, because every time we look more into it, there's more reasons it's wrong.
So I don't know where to start, because on one hand, you've got a couple of things going on.
You've got the factor analysis and the fact that that makes no sense what she did.
I also want to hear more about that and how you're saying you wouldn't necessarily do factor analysis, because that's news to me.
I don't know how any of this works.
So I would have thought that you always do that.
I don't know what that means.
So I'd love to learn more about that from you.
But do we need to separate kind of the problem of also the questions she came up with suck and don't measure the things she thinks she's measuring?
So should we start with... I think so.
What if she were measuring the things she... Are you agreeing with that approach, hun?
Yeah, but I also agree with your statement there that her questions don't measure what she thinks they measure.
Oh, 100%.
I feel like, are those two distinct issues that maybe we can tackle one at a time?
We could first start with like, okay, let's assume her questions are actually capturing something and they're well-worded and they're actually well-formed, which, side note, spoiler, they're not.
They're absolutely terrible.
As you say, maybe some of them are serviceable, but like, they're not capturing the thing.
But let's assume she was maybe asking okay questions.
Tell us about this complicated factor analysis. I thought I understood some of it, but like it really escaped me eventually.
So, okay.
If we were doing this like the normal way, they've got their 203 questions and like, let's just pretend they were fine.
What I would do, if I were going to be as agnostic as possible, if I was going to not mess with my data, is I would just run the analysis and I would let it pull out: how many factors, computer, do you think there are?
How do I put these questions into bins in the most optimal way?
So I am making bins where the questions within it are highly correlated with each other and they're not highly correlated with questions in the other bins.
And if that came out with 203 bins... Not good!
Yeah, that would not be good.
That would mean all my questions measure completely unrelated, different things.
If I came out with one bin, I just asked the exact same questions over and over again.
Well, oh, okay.
Is that, so they all correlate, meaning they could be the same question, but they also could be things that like are very highly correlated in some reasonable way.
Yeah.
Yeah.
So, um, it's not necessarily a bad thing.
It could just mean I have hit on one thing.
I am trying to think how this could have... yeah, yeah.
One really big construct that is hiding in this data. And there's some major thing; there's probably a million factors influencing human behavior all the time, right?
But we're trying to model this, the phrase I was taught about this is, all models are wrong, some models are useful.
It's always going to oversimplify it and miss something, but it could usefully tell me something about people.
And maybe there's this one factor that is doing a really good job of explaining how people answer these questions.
And your score on a spectrum on this factor would tell me how you tend to answer these.
And then I gotta look at it and look at what questions go on the high end of it and what questions go on the low end, and try to name it.
What is it?
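For listeners following along at home, the workflow Janessa describes, run the analysis, let it suggest how many bins there are, then look at which questions land in which bin and invent a name, might look roughly like the sketch below. It uses simulated data, scikit-learn's FactorAnalysis, and one common rule of thumb for the factor count; none of it is from the actual thesis, and the trait names are invented.

```python
# Toy illustration (NOT the thesis's data): let the numbers suggest how many
# "bins" (factors) exist, then eyeball which questions load on which factor.
# Naming the factors afterwards is always a human judgment call.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500

# Two genuinely separate latent traits (say, "extrovert" and "night person"):
# six questions tap the first trait, six tap the second.
trait_a = rng.normal(0.0, 1.0, n)
trait_b = rng.normal(0.0, 1.0, n)
answers = np.column_stack(
    [trait_a + rng.normal(0.0, 0.6, n) for _ in range(6)] +
    [trait_b + rng.normal(0.0, 0.6, n) for _ in range(6)]
)

# A common rule of thumb (the Kaiser criterion): keep one factor for every
# eigenvalue of the correlation matrix that is greater than 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(answers, rowvar=False))
n_factors = int((eigvals > 1.0).sum())
print("suggested number of factors:", n_factors)  # expect 2 here

# The math only hands back anonymous loadings; "what is it?" is on you.
fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(answers)
for i, row in enumerate(np.abs(fa.components_)):
    strongest = sorted(np.argsort(row)[-3:].tolist())
    print(f"factor {i}: loads strongest on questions {strongest}")
```

The step that matters for this episode is the last loop: the software prints anonymous loadings, and a human decides whether to call the result "night people" or "PC Authoritarians."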
Oh my god, that's such an important point that I meant to say.
This is another thing that is going to sound so stupid to you two experts.
I didn't even think about the fact that, like, yeah, what you're doing is you plug all this stuff into a computer, you know, you say, computer, feed me, do factors.
And the computer's just running fucking numbers.
It's not the whole part where you're assigning a value and names to these factors.
Obviously, this is true if you think about it for five seconds.
But I think, again, to a layperson, it's not something that I immediately thought of, which is, oh, yeah, she's just naming these factors what she wants to name them.
There's no reason.
She could have named it anything.
She could have named it left handedness and right handedness.
And it would be like, obviously, you would want to name it something based on the actual questions.
But there's nothing in the stats that makes her pick a name that actually reflects what the factor is that she's measuring.
So, the raw statistical analysis can tell her, hypothetically, oh, you've got these two beautiful factors.
And then it would be up to judgment to say, well, what are those, based on the questions? And this is a good question for you, Janessa: what's that process?
Would you, let's say you did this, would you look at the questions and kind of go, oh, okay?
So this question five is very consistently this factor over here.
Question seven is very consistently this factor over here.
And would you kind of divide them up and then maybe using that method, put a name to them?
Is that how you would try to name them?
Or is there some other method you would use?
That's exactly it.
And actually that kind of leads me into a perfect example here.
It'll also lead me into something I'm going to come back to, because, so she had either 41 or 40 or 20 or 22 attitude questions.
I don't know.
I tried.
But according to table three, there were 22, and 20 end up surviving the factor analysis.
They usefully fit into a factor, right?
And so these attitude questions, when they forced it to go to four factors, I'll also come back to that, One of the factors that pops out is punitive justice.
So when we ask you your attitudes about all these different things, oh, this is interesting.
Here's four questions.
So like, okay, you had to rate your agreement, right?
And I'll give you the four whole questions that create this attitudes-towards-punitive-justice factor. That's...
You're right to laugh, that's also a problem.
Alright, so here are the four.
See if you can put this together, why you might have called this punitive justice.
Okay.
The first one was, when a charge of sexual assault is brought forth, the alleged perpetrator should have to prove his or her innocence.
Does it specify if you're in court, or does it specify where you are?
I can't remember.
That's the whole question.
Oh, okay.
I thought there was... Was there one that was, like, focused on the university?
Yes.
Yeah, there's some that are... For that one, it doesn't say, like, what we're even talking about.
It just says... Yeah.
Okay.
Gotcha.
So, this one, all it says, when a charge of sexual assault is brought forth, the alleged perpetrator should have to prove his or her innocence.
The second one that fit in that category, if a student is suspected of sexual assault, he or she should be suspended pending further investigation.
Number three, an allegation of sexual assault is to be rejected if there is reasonable doubt about the guilt of the alleged perpetrator.
Number four, it is wrong to criticize the status and rights of women under Islam because it is racist and disrespectful of multiculturalism.
Those four correlate highly.
One of these things is not like the other.
Wow.
Yeah, that's not p-hacky at all.
That's just totally reasonable grouping there.
And that is called punitive justice.
But what's interesting is there's also times where she's divided it out into like sexual impropriety categories, which kind of made more sense to me.
And so you got to be pretty certain about what you're testing, right?
Like when you're asking it, when you're forming a question, you don't want it to be that someone could answer one way for like seven different reasons, right?
Yeah.
Yeah.
I would say every once in a while, I like to leave something ambiguous because I'm trying to think of a good example where I might want a participant to, I don't want to tell them what I think the question means, I want to leave it up to them to, you tell me what it means, you know?
I could imagine a scenario where that's true, I just, I don't think that happens at all in this study where I would want that to be the case.
In this study, you really want them to know what you meant by it?
Yeah.
Well, and especially not including attitudes and behaviors in the same sentence.
So then, like, you don't know, right?
It's the compounding information within an item.
And even if you had an ambiguous question that was worthwhile, you still wouldn't want to have compounded because then you can't interpret it at all.
You don't know what they're responding to, which part of the question.
Right, exactly.
Like, I mean, I looked at these questions myself and I was like, well, some of them are like a classic loaded question.
Like, I know what you're doing here, you know what I mean?
And like, it's a really common issue in social psych that You try really hard to stop participants from knowing what you're up to because they behave differently when they know what you're up to and what they know what you're looking for and this is like a Worst-case scenario for that of participants are gonna try to behave in a certain way It's better because they did an online survey.
You're not face-to-face They're not worried about judgment that helps a little but people definitely knew what you were doing.
They knew you were asking them about Oh my gosh, I just realized it's even worse than what I said.
were going to react a certain way.
Like, I'm not shocked about the high correlations they got because people were probably doing that.
You know what I mean?
Oh my gosh.
I just realized it's even worse than what I said.
I forgot that the very division and the four factor solution you're talking about was the time where the first factor was sexual propriety.
And then all of your punitive justice questions, one of which is just randomly about Islam.
I don't even understand the connection.
But the other three are all about sexual propriety.
You can't have punitive justice be the category.
And then you ask, by the way, they aren't even about punishment.
They're all about due process.
Yeah.
Which is, I think, a different thing.
And it might be people's misunderstanding.
You know, like I could see people easily misunderstanding something to say, well, yeah, you should have to prove you're innocent.
And I think that what they're doing there is a big gotcha.
Because actually, the burden should be on the other person to prove that you're guilty.
And it's like, yeah, okay, but like, what context is this in?
Are people just talking about, like, casually?
Do they know if you're talking about a court of law?
And also, again, this is all, those are three due process questions related to someone accused of sexual impropriety.
Why is that punitive justice?
None of those talk about punishments at all.
They just talk about what you should do while someone is accused of something, which is, I mean, again, maybe it's nitpicking there, but like that's not at all.
And you would want to ask, I would think, a range of questions about different topics.
And you could even ask some that have nothing to do.
Oh, this could, would this be a good idea?
Ask some that have nothing to do with PC, like ask them about punishment, just punishment in general.
I think also where you're saying like, yeah, maybe this is nitpicking, but that's the whole point.
I mean, it is about semantics when you're talking about naming a factor, and you do have to be mindful of that.
And so, yeah, it's nitpicking, but that's the whole purpose, so.
Let me bring you to, it's a piece that's so easy to overlook here.
So the first one, when a charge of sexual assault is brought forth, the alleged perpetrator should have to prove his or her innocence.
It's rate your level of agreement, and we've got a positive correlation here, so agreeing with that is considered you're a punitive person.
Rate your level of agreement.
If a student is suspected of sexual assault, he or she should be suspended pending further investigation.
Slightly lower correlation, but a positive.
So agreeing with that, yes, you should suspend them.
So punitive.
The next one: an allegation of sexual assault is to be rejected if there is reasonable doubt about the guilt of the alleged perpetrator. Positive correlation again.
Agreeing with that.
I didn't think about that.
Agreeing with that is, you are for punitive justice?
That doesn't make any sense.
Wow.
No, and I thought, maybe are they just absolute valuing all of these to positive, but then all the egalitarian ones are negative numbers.
What a mess.
That exact thing I didn't notice, but there are other places I noticed that where I thought, again, I was like, am I an idiot?
Am I dumb?
Am I just not understanding this?
But no, like there's so many places where you're like, I'm sorry, you're saying that's a positive correlation?
Yeah, that you're exactly right.
I mean, an allegation is to be rejected if there's reasonable doubt about, I can't even form an ideological, I was about to say, well, oh, those are all the conservative answers or the, but no, that's not even true.
Like, the yes answer to "the perpetrator should have to prove his or her innocence."
You would think that all that's a crazy lefty.
You just want if anyone's accused of sexual assault, you want them instantly banned.
OK, cool.
So you're saying hypothetically, you could say that's the like hardcore lefty, quote unquote, PC answer.
But then if the other one, like you said, is an allegation of sexual assault is to be rejected if there's reasonable doubt, that's the opposite of the super lefty answer.
You would think, you know, you'd want that correlation to go inverse to your point.
So like what?
How can they?
Yeah.
Politically, it doesn't make sense.
Punishment-wise, it doesn't make sense.
Again, none of these are about punishment.
I'll point out another thing for the second question.
If a student is suspected of sexual assault, he or she should be suspended pending further investigation.
I could hold the opinion that, oh yeah, because you're in a school, so sure, you could suspend him.
But I could also hold the opinion that if he was found guilty, we should barely punish him.
Like I can hold those two opinions like that.
So you're not really testing on like punitive justice.
You know, I could hold the opinion that I'm all for like restorative justice and I would want him to work with the person, but I could still believe that he should be suspended while the investigation is pending.
Those are two different things.
Yeah, the one that was really telling to me is what they call, what is the name they've got here, egalitarian beliefs and something, something?
Oh yeah, a fourth factor, egalitarian beliefs and policy.
Yeah, those are all the negative ones for some reason.
Egalitarian beliefs and policy.
Well, it's because if you agree with the following statement, you're not egalitarian.
It's disagreement they're looking for.
Racial, ethnic, and gender quotas should exist in employment.
Racial, ethnic, and gender quotas should exist in education.
Wait, yeah, what?
Government interventions should exist to ensure equality in professions where there is historically large inequality.
And in academia and corporations, ethnic, racial, and gender diversity are required to have a diverse range of ideas.
They would like you to disagree with those statements in order to be considered egalitarian.
What?
Oh, now I'm even more confused.
Yeah.
That's the only reason I can think of that that's negative.
I don't know why some of these numbers are negative and positive.
Right.
Well, now I guess my only worry is now does that affect how she's categorized the PC egalitarians?
Because that makes even less sense because those are all conservative.
That's the opposite of what she's saying, obviously.
So in their mind, the egalitarian thing is like, no, you got to be equal in your opportunity.
It's like she's doing the "the only way to stop discrimination on the basis of race is to stop discrimination on the basis of race" thing.
She's doing that bullshit and saying that that's more egalitarian.
But then elsewhere in the study, I'm pretty sure that's not how she's talking about PC egalitarians, especially, by the way, if she's saying a women's studies professor is a PC egalitarian.
So that this just makes no sense.
I know, I was trying to figure it out, I was like, oh, it's got to be like a reverse scoring thing, like blah, blah, blah.
But then I was like, no, because if they're saying the first one on the censorship, rate your level of agreement, the words of classic nursery rhymes should be changed to more inclusive lyrics.
And that's positively scored, so agreeing with it means you're for censorship.
And this is, rate your level of agreement, racial, ethnic, and gender quotas.
It's negatively scored for agreeing with it, so they think it's not egalitarian to believe that.
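For anyone following along at home, here's what reverse scoring would normally look like: on a Likert scale, a negatively keyed item gets flipped before averaging, so a high score always points the same direction as the positively keyed items. A minimal sketch, with hypothetical scale endpoints (this is not code from the thesis):

```python
def reverse_score(raw, scale_min=1, scale_max=7):
    """Flip a negatively keyed Likert response so that a high
    score means the same thing as on positively keyed items."""
    return scale_max + scale_min - raw

# A "7 = strongly agree" response to a reverse-keyed item
# becomes a 1 after flipping; the midpoint stays put.
print(reverse_score(7))  # -> 1
print(reverse_score(4))  # -> 4
```

If reverse scoring were what produced those negative numbers, you'd expect it to be applied consistently, which is exactly what doesn't seem to be happening here.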
Despite my best efforts, we've gone into how stupid these questions are.
That is a separate thing.
No, so we'll put a pin in that.
There's so much more to talk about with these stupid questions, but talk about the stats weirdness that happened.
Yes.
Okay.
So there's a couple of different things that happened.
First, we start with our, like, 203 questions, and I guess that's not, like, ridiculous on its face.
Except there's not 203 questions because then if I go to the other table, there's either 184 or 191, depending on how you count it, in the Appendix A.
And on Table 1, there's 200, but it's missing the uncategorized questions of which there are seven, which would be 206.
So I don't know!
I don't know how many questions there are, and I don't know where they are, and I don't know what happened to them.
But whatever, there's some questions.
And the categories... Okay, so these are categories they made.
They looked at the questions, they thought about how they were going to write the instructions. This is not done by statistics at all, which would be fine if this is just for organizational purposes.
They made attitudes, language, sensitivity to offense.
One thing we have to be careful about is they're going to count in the question numbers.
They have two like sort of subscales of that.
Sensitivity to sexist insults.
Sensitivity to right-wing name insults.
There's only two questions about it.
System injustice.
Censorship.
Punitive measures, which sometimes gets called punishment.
Behavioral offense, which is somehow different from sensitivity to offense.
General PC, which is only four questions, and then there's uncategorized questions, which is seven questions, which is somehow different from general PC.
I don't know.
So those are the question categories they came up with.
But then when they go to run the factor analysis, what I would have done is just put the questions into the damn machine.
Just run an analysis.
You have 200 questions.
See what happens.
And it kind of feels like they might have done that because at one point they say they did a factor analysis and got like 10 factors out.
Yeah.
And then they did it again a slightly different way, and they also once again got ten factors out.
And nowhere in the 400 tables in this paper is there anything with ten factors as the result.
So then they said, we don't like that for reasons.
Yeah!
Like we didn't like, we got 10 factors and we did it again.
We said like, um, can you please try again?
And the model said it's still 10 factors.
We're going to ignore that though.
Yeah.
Yeah.
So I'm not quite sure what happened there.
So then what happened is they took just attitudes.
Which, again, was either 41 or 20 or 22 or 19 questions.
And they took 22 of them.
It says, 22 attitude subscale items from the original version of the PC scale.
I don't know what that means.
And they ran a factor analysis just on that.
And it's a forced four-factor solution.
They said, do not give me 10.
Do not give me whatever.
Give me four.
I would like four.
Now, does that change the math?
Like, that is a thing you can do, right?
But does that change the math, or does that just say, give me your top four factors?
Yeah, I guess top four is a reasonable way to think of it, yeah.
And it's not necessarily not okay.
Certainly, if you came in with an a priori, I have a hypothesis about this, that they're gonna be four.
Yeah, go for it.
If you just want to say 10 is unwieldy, and I just want to see, yeah, kind of like you're saying, top four, why not?
Go for it.
I don't see, why four?
Why not six?
Why not two?
I don't see the logic here, especially when we are going through those questions and showing how much overlap there appears to be here.
That's some of the beauty of a factor analysis.
If I didn't force the number, if I just hit go, and it grouped together some questions and grouped together these other ones, and you were like, wow, that's so interesting.
Yeah.
I wonder why the way people answer these sexual propriety questions goes together, and the way they answer these other ones that I would have thought are like sexual propriety, for some reason the math thinks they're not the same.
I would have thought that meant something, but when you force it to be four factors, I don't necessarily think it means anything, because you told it to be four.
If you had told it to be three, would it have put those two together?
Maybe.
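To make the "forced four-factor solution" concrete: in most stats packages you literally hand the software the number of factors you want, and it obliges whether or not the data support that number. A toy sketch in Python using a simple eigenvalue-based extraction on made-up survey data (the thesis used different software, and these numbers are random noise):

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake survey: 300 respondents answering 22 Likert items (1-7).
responses = rng.integers(1, 8, size=(300, 22)).astype(float)

# Factor extraction starts from the item correlation matrix.
corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]           # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# "Do not give me ten. Give me four." -- keep only the top four,
# no matter what the eigenvalues actually suggest.
k = 4
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
print(loadings.shape)   # -> (22, 4): four factors, because we asked
```

The computer cooperates either way; the question is whether the resulting numbers justify the choice, which is the point made below about the quality of the fit.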
Right, but it's giving you- now we can talk eigenvalues and other terms that I don't- it's not as though you can just make something have four factors if it really doesn't, right?
Because you're going to see that reflected somehow in terms of like the ratings, the statistical scoring of like how well is this actually working, right?
Yes, yes.
Okay, that's fair, yeah.
So if I try to force it to be four, it'll do it.
The computer will cooperate, but it will give me back numbers, results that show how poorly correlated these are.
So it is true, yeah.
They force the four, and the numbers don't look terrible enough to say that this is completely unreasonable, you know what I mean?
But did they already do some weird stuff to get to the four?
Because now are we back to the four we talked about before, which is sexual propriety, censorship, punitive justice, egalitarian beliefs and policy?
Yes, so those four.
Yeah, because now I am racking my brain to figure out, again, not just why was the Islam question categorized the way it was, but now I'm wondering like, wait, why did it even correlate with those other questions?
That does seem weird, but like if, okay, so this is where your expertise is really key.
When you look at that chart, because in one of the million tables, as you say, it gives us those scores, and precisely what they are is, you'll have to help me, but I think the first number is: this is how much this question correlated within this factor.
So like, that's how we went in order.
And then the communalities number tells you... I'm blanking.
What does that tell you?
Oh, this is the tough one.
It's like, just in general, how useful is it towards the factors overall.
The extent to which this item loads onto any factor is useful to any factor at all.
Okay.
It's a part that also confuses me at times.
Yeah.
And I often ignore that line on the table.
I use it to X stuff out and I move on.
But it does, I believe, indicate something exactly like you're saying.
As you look down, the first three look like pretty similar numbers.
And then we have a pretty serious dip where, yeah, that Islam one is, I mean, the math is showing us it's correlating with those other ones, not nearly as well as the top three correlate with each other.
Yeah.
I will give us the dictionary definition to work with.
Communalities, it says it's the proportion of each variable's variance that can be explained by the factors.
So you would want a higher value, because the more variance that's explained by the factor you're actually looking at, the more directly related to that factor it is, right?
Like that tells you that it's not being explained by something else.
I guess the challenge that I have is that I don't totally get how that's not redundant in a way, but there is some way that it's not redundant in some complicated stats way.
I just don't, I can't articulate what it is.
Yeah, so it's kind of like when you say like the variance is explained by, one way to think of it is any variance left that's still not explained.
If I added a fifth factor, what would still get explained?
A sixth factor, a seventh factor, you know, and eventually I'd have all the factors that explain all of human behavior and there'd be seven million of them.
But I think that's a way to think about it: for this item and the factors that I have picked out, how much of the variability between people, as some people answer it one way and some another, is explained. See it?
I see it now.
So maybe to put it in terms that will help to other lay people, let me make sure I'm thinking of this right.
Let's say you have a question that everyone answers yes to.
That would correlate 1.0 with whatever factor you found.
But it would have a communality of like zero, because if everyone answers yes, then yes, absolutely.
Whatever factor you divided that study into, that question would be like, yeah, 100% with this factor.
But none of the variance would be explained by that factor alone because every factor would have it.
I mean, I'm using extreme examples, but is that right?
Maybe that's not right, but it's the amount that other factors also might contain that answer.
But the variance is explained just by the factor you're looking at.
Could that be it?
I think so.
I think for communalities, you shouldn't think of the individual factors.
I think you should think of the whole model as a whole.
Well, then why is it answered by question, though?
Oh, yeah, yeah.
So, for each question, given that we have four factors in total, is the variability between people getting explained?
So, yeah, if everyone answered it exactly the same, like you're saying, yeah, the communality would be zero because the factors wouldn't help to explain it.
Yeah.
So I think that's exactly how to think of it.
The easiest, most straightforward way.
The communality tells me the extent to which that factor or any factor is helping to explain how somebody answers this question.
That factor or any factor.
So does that mean the communalities is a score of kind of the entire factor analysis in a way?
Like under this four factor analysis, this is how much this question is explained in this model, essentially?
I think so, because I'm pretty sure the extent to which it is explained by only this factor is what the other number is.
Yeah, okay, that's why I was thinking it was redundant, I guess.
And so, okay, yeah, I think I've made sense of it now, yeah.
Yeah, so the first number you'll see is the extent to which this factor helps explain the variability in this question, and the second number at the very end of the page is the communality.
How much does this model even existing explain the answer?
And if it was very low, that means this question doesn't really fit into this model.
And so sometimes they just get tossed.
They have nothing to do.
This model doesn't explain this.
So that might not necessarily be an indictment of the model.
It could be an indictment of that question.
Like if you ask a question that just kind of, okay, that's interesting.
Yeah, absolutely.
So like they tossed the vegetarianism question because they were like, it's just not doing anything.
So this is a good way to, again, name your factors.
Cause this tells you, oh, these are the scores that actually matter within the given factor.
And the other ones could have a correlation, but if, again, if everybody's answering them a certain way or they don't work in this model, they'll have a low communality score.
And then it would tell you, okay, even, even if it correlated most with factor three, it doesn't really mean that it's like core to that factor.
And it wouldn't be really something you would want to name that factor based on.
Right.
Yeah, yeah, I think so.
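A way to see the two numbers side by side: once you have a loadings matrix, an item's communality is just the sum of its squared loadings across all the factors in the model, which is why it scores the model as a whole rather than any single factor. A toy sketch with invented loadings (not the thesis's numbers):

```python
import numpy as np

# Toy loadings: 4 items x 2 factors. Each entry is how strongly
# the item loads on that factor.
loadings = np.array([
    [0.80, 0.10],   # item 1: clearly a factor-1 item
    [0.75, 0.05],   # item 2: clearly a factor-1 item
    [0.10, 0.70],   # item 3: clearly a factor-2 item
    [0.15, 0.10],   # item 4: loads on nothing much
])

# Communality = sum of squared loadings across ALL factors:
# how much of this item's variance the whole model explains.
communalities = (loadings ** 2).sum(axis=1)
print(communalities.round(3))
# Item 4's communality is tiny -- this model just doesn't explain
# how people answer it, so it's a candidate to get tossed.
```

That's the "vegetarianism question" situation: a low communality indicts the fit between that one item and the model, not necessarily the model itself.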
And I think this is also part of how... I don't know how much we want to get into the orthogonal versus... I don't know that... Yeah, if you actually made sense of that, sure, but I don't know if we... I personally never made sense of it enough to say anything about it that would make sense to people, but sure, you go if you did.
Yeah, all right.
Let me give you a quick blurb and we can decide whether we want to talk about it.
Normally, the way I'm used to talking about it, maybe other fields of psych do it differently.
I want to reserve space for the oblimin people, whatever.
I normally do, if I was doing a factor analysis, it feels almost definitional to me that I would do it orthogonally, which means I would want my factors to not correlate with each other.
Right.
That items that belong in box A, not only do they all belong in box A because they match each other, but they don't belong in box B. Mm-hmm.
Because the box B items are different.
And so this is, for example, how the big five personality model was made.
I take all the adjectives out of the dictionary that look like they could relate to a person and I ask people to either, you know, describe a person and see which ones they put together or Tell me which ones go together.
And run a factor analysis and what you'll find is you get these five nice categories in every country.
It's really cool.
And so, for example, you've got extroversion, introversion, and you've got openness to experience.
And I've got a bunch of questions I can ask you that will assess extroversion.
I've got a bunch of questions I can ask you that assess openness.
Within those bins, the openness questions correlate with one another.
The extroversion questions correlate with one another.
But the extroversion questions don't really correlate with the openness questions, and I want them to not correlate with each other, because I want these to be independent, different things.
And that's why we would say these are five unique traits, and you can get a score from 0 to 100 on all five of the different traits on the Big Five.
That's the orthogonal side of it.
And had she done orthogonal on, you know, what they eventually come up with, which is the two-factor analysis, then we would have likely seen questions be grouped differently, because right now there is assumed to be relationship across both factors, right?
Yes, exactly.
So she's doing the version, um, they're calling it obleman, but I really liked the other word for it.
Oblique?
Yes, oblique, thank you.
I don't know why, for whatever reason, I think oblique shows up in the literature more often.
So you've got the orthogonal there where it's everything has to – like, definitionally, they have to be, to a certain degree, not correlated.
The oblique one, you're allowing them.
To be correlated to a certain extent.
Sometimes you might do that if you're like, I just really want to be able to pull out these two factors.
I know they're correlated, but I do think they are different things.
You could almost think of them like sub-factors of a larger factor, maybe, and that's why they're correlated with each other.
I can't make sense of what impact this has in real terms on the data.
Why would you do this other than you weren't getting good enough factor analysis going when you didn't do it?
Is there any reason you can think of why you would have wanted to do this?
In this study specifically or in general?
Yeah, in this study.
In this study? No.
I got nothing.
It just seems, again, lots of p-hacking, potentially.
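For the orthogonal-versus-oblique distinction in concrete terms: orthogonal rotations (like varimax) constrain the factors to be uncorrelated with each other, while oblique rotations (like oblimin, the one used here) let the factors correlate. A toy numpy illustration of what "uncorrelated factors" actually means, using synthetic scores rather than anything from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Oblique-style factors: factor 2 deliberately shares variance
# with factor 1, so scores on the two factors correlate.
f1 = rng.normal(size=n)
f2 = 0.6 * f1 + 0.8 * rng.normal(size=n)
oblique_r = np.corrcoef(f1, f2)[0, 1]
print(round(oblique_r, 2))     # noticeably nonzero, around 0.6

# Orthogonal-style factors: QR-orthogonalize the centered score
# matrix, forcing the factors to be uncorrelated, the way an
# orthogonal rotation like varimax insists on.
scores = np.column_stack([f1 - f1.mean(), f2 - f2.mean()])
q, _ = np.linalg.qr(scores)
orth_r = np.corrcoef(q[:, 0], q[:, 1])[0, 1]
print(round(orth_r, 10))       # essentially zero
```

With the orthogonal version, "belongs to factor A" automatically means "doesn't belong to factor B"; the oblique version gives up that guarantee, which is fine with a good a priori reason, and suspicious without one.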
Okay, we're going to call it there.
Believe it or not, that's about half of what we recorded with Janessa, and there's some even more complicated stuff.
If you want to hear more, including some more complicated, advanced statistics stuff.
But again, Janessa does a great job of making it incredibly clear for us.
Yeah, really accessible.
Yeah.
Yeah.
And so if you'd like to learn even more smart reasons why Jordan Peterson sucks, hop on over to patreon.com slash where there's woke and get a bonus Janessa, you know, it's a, it's a whole, like another episode.
It was that good.
And seriously, a giant thank you to Janessa for taking the time.
Yeah.
Not only with this, but with other stuff.
She even looked at another study that I sent her where someone used, the study wasn't great, but they at least used like real methods, like actual methods to kind of categorize people as woke or PC or whatever.
And even though we kind of, you know, it wasn't like we agreed with the study, but it was like, oh, this is how you would actually do it.
And so that was really useful and interesting too.
So just wanted to say a huge thank you to Janessa.
She was amazing.
I hope to hear more of her on various stuff, including Serious Inquiries Only.
So if you'd like to hear more, check that out patreon.com slash where there's woke.
If not, we'll see you all next episode.
Thanks so much for listening.
Do you want to sign off?
Goodbye.
Goodbye now.
Bye bye.
I have so many browsers open, I can't find where Zencastr is.