July 31, 2017 - The Unexplained - Howard Hughes
01:02:12
Edition 306 - Cathy O'Neil

New York number cruncher Cathy O'Neil on "Weapons of Math Destruction" - how algorithms may harm us...


Across the UK, across continental North America, and around the world on the internet, by webcast and by podcast, my name is Howard Hughes, and this is The Return of the Unexplained.
Thank you very much for being there, for keeping in touch with me, letting me know what you think about the show and making suggestions about it.
I'm going to do some shout-outs on this edition.
Then we have somebody who I think is a very compelling and a great guest on this edition of the show.
Her name is Cathy O'Neil, and we're going to be talking about weapons of math destruction.
That's the title of her book.
She is a numbers whiz and a highly accomplished mathematician.
She is very, very concerned, and I have to say so am I, about the way that data is gathered about us.
It is put into big number crunching machines by big organizations everywhere now, and they make decisions about us that are sometimes completely wrong, harmful, and damaging.
But this is the way that the world is going, and the question for all of us is, what are we going to do about this?
Is this march of technology something that we can alter?
Can we have our say?
Can we make them change it?
Or is it just going to keep happening?
I think it's a real issue, a very, very big issue for all of us.
And Cathy O'Neil is going to be addressing it on this edition of the show from her home in New York City.
Thank you very much to Adam, my webmaster at Creative Hotspot in Liverpool for all his hard work on the show.
Thanks, Adam.
Couldn't do it without you.
Let's do those shout-outs then.
And let's start with Jason in Melbourne, Australia.
Says, I love your show.
And being an artist, I listen while I paint.
Has some thoughts about Michio Kaku.
Says, while I admire his optimism, I have to take issue with a number of the things that he said.
He is certainly a futurist, says Jason, when it comes to technology, but sadly less progressive when it comes to social welfare.
And I was disturbed that he decried the idea of a universal basic income and instead favoured tax cuts for tech companies.
I don't think that's precisely what he said, but I understand where you're coming from, Jason.
And look, if you want my two cents worth, I think we have to think about these things.
You know, a lot of us are having to live on less and less money.
Populations are getting bigger and bigger.
People are living longer and longer.
What do we do?
As technology takes more and more jobs, even if we retrain and maybe do other things, who knows whether they will pay the same?
So we might be getting even poorer.
How are we going to live in the future?
So issues like the universal basic income, even if it's never adopted, need to be discussed.
And in some countries, it is being discussed, but not here in the UK and maybe not in the US, not so much.
Nicholas in Northampton wants to hear Steven Greer on the show again.
He will be, Nicholas, thank you.
Michael in Detroit, thank you for the comments and for your donation, Michael.
Lawrence McNeill, all points noted, Lawrence, thank you.
Sharon in Canada, thank you for your suggestion and the kind email, Sharon.
Adam, good email, says, perhaps the reason why SETI has drawn a blank in the search for ET so far is that intelligent life might occupy a hidden valley of matter and dark matter.
And that's maybe why we're not finding it or seeing it.
Good point, Adam.
Food for thought.
Doug, I share your concerns about the cashless society.
Here's something else that I'm very, very worried about.
Whose benefit is that for?
Why are they doing this?
Is it really a great thing for all of us?
Who knows?
But I'm very, very skeptical about it.
I've got to say.
And Doug is too.
Steve, thank you for your stories.
Chris in Wakefield, Yorkshire.
Sue Blackmore.
Chris, you suggested her, and she will be on the next edition.
Thank you.
John, thank you for your email.
Thomas asks, did UFOs play a role in the Bible?
What do you think about that?
Stuart, your comments noted.
Naomi in Melbourne says, I know many people would have enjoyed Michio Kaku.
I found what he had to say mainstream and lacking in any real knowledge when it came to the subject of alien abduction.
I take your point, Naomi.
I have to say, I've got a lot of respect for Michio Kaku.
He is a great scientist and always a very compelling guest with views on everything.
And I've got a lot of good mail about him, but I just wanted to highlight emails like Naomi's.
James says, been listening for a good number of years now.
Thanks for all the work you do.
Professor Kaku, very intelligent man.
I can't agree with him, though, about seeing the afterlife as a reduction of blood flow in the brain.
I take your point too, James.
I'm kind of hoping that there is more to this life than we see.
And when it's all done here, I'm kind of hoping that there's something better to go on to.
But who knows?
We're not going to be able to answer that yet, are we?
Simon, thank you for the link to the article about sensing the dead, Simon.
Nigel in Derbyshire, lots of good points and praise, but asks, when I interview American people, do I realize that I take on an American accent?
I didn't actually, Nigel.
I'm going to listen to my past interviews, and thank you.
And you like it when I do my scouse accent, which of course comes from the city of my birth, and I'm very, very proud of it, Liverpool, you know.
And a lot of my relatives, you know, talk a little bit like that, you know.
And that's our accent.
And it's great.
You know, when I used to go on holiday when I was a kid, my father had a Liverpool accent and a better voice than me, but a Liverpool accent.
And he was so conscious of it.
And it's quite wrong that he was.
But he felt that he would be judged because of it.
So he used to make my mum and I, who didn't have quite so strong Liverpool accents, go and ask if there were any rooms available at guest houses because he thought if he went, they'd say no.
And it's a previous era, but the very thought that people would make a judgment based on how you speak is an awful one.
I think some of that still goes on.
Nigel, thank you.
Richard in Bolton says, enjoyed the Lucan programme.
Have you done a programme about the D.B. Cooper case?
Yes, I have, but we can always revisit it.
Eli in Pennsylvania says Jim Carroll might be a good guest, one to check out.
Thanks, Eli.
And finally, Janine in Scottsdale suggesting Ruth and Friend.
Janine, do you have contact details for her?
I seem to remember that I tried to get in touch and wasn't able to.
But I'm certainly keen to do this.
So, you know, let's see if we can take it forward.
Thank you for your emails.
If you want to get in touch, it's theunexplained.tv.
That's the website.
Follow the link from there.
Tell me who you are, where you are, and how you use the show, and I would love to hear from you.
Let's get to the guest in New York City now, Cathy O'Neil.
And we're going to talk about algorithms and how maybe the algorithm's going to get you.
Cathy, thank you very much for coming on the show.
My pleasure.
Thanks for having me.
Now, Cathy, I've wanted to get you on this show.
In fact, when you were in London recently, I pursued you around London via your publishers to try and get you on, and unfortunately, we missed you.
But I'm really thrilled to have caught up with you in New York City on this Sunday.
We're pre-recording this a few hours before radio listeners will hear it.
My podcast listeners will also hear this around the world.
I wanted to get you on for a whole variety of reasons, one of which is I think the stuff that you're talking about is very much the elephant in the room.
It's not a phrase that I enjoy using that much these days because everybody uses it.
But, you know, this stuff, the material that you are discussing, I think is core to so much of human activity today.
And people tend to just not be aware of it or simply ignore it.
And I'm guessing that's why you embarked on the project that you've embarked on.
Yeah, actually, that's a really nice way of saying it.
I mean, when I started thinking about this stuff about six years ago, I honestly didn't know anybody else who was thinking about it.
And nowadays, there's actually a kind of a growing community, and it's getting pretty big of people that are sort of actively worried about this stuff, but it's still not creeping into the actual place where it needs to be, like the big companies like Google and Facebook.
So there's a lot more to be done, that's for sure.
All right.
I want to do some biographical stuff about you, but before that, we'll just say that what we're going to be talking about are algorithms and how they are used.
We know they're used so widely by organizations of every kind, and that is not always a great thing for us.
In fact, I did a little trailer for this show, and the phrase that I came up with is the algorithm's going to get you.
It certainly seems to be the case.
Well, I just want to be clear that I'm not anti-algorithm.
I actually am trying to push a kind of better, more thought-out, more ethical algorithm.
So I'm not trying to say let's stop using algorithms.
But I do think there's a growing group of algorithms, which I call weapons of math destruction, that really have dire and pernicious effects on society.
And they're not being kept track of.
So they could become the most powerful things, influences and forces in our society overall if we don't start thinking about it better.
And the problem is the thought process, I think.
We'll get into that.
Let's talk about you, just to introduce you to our listeners here in the UK.
You are a maths person.
Now, I have to say, I've always been rubbish at mathematics and theories and formulae and calculus and all those things.
So I have massive admiration for somebody who is because you were into numbers from what I read from when you were a little girl.
Yeah, I've always loved math, loved solving puzzles and solving the Rubik's Cube and things like that.
And I particularly loved prime numbers.
But I should mention also that I think the sort of fear and slash admiration that people have for mathematicians is actually part of the problem.
And it's not that I'm not grateful for admiration, but I think that it sometimes creeps up to the level of intimidation, right?
So when people are actually intimidated by math, then they don't know how to push back against mathematical algorithms.
Right, even in this day and age where everybody thinks they know everything, I think there is a big assumption that clever people know best, so best leave it to them.
Right.
And you'd think in the age of populism, we would start being a little bit more skeptical of that.
But somehow we haven't become more skeptical of this, what I call the authority of the inscrutable.
My goal is to get people to think a little bit more clearly about exactly what is going on in the process that used to be a human process, but has now been replaced with an algorithm.
If they used to be able to complain about the process, they should still be able to complain about the process, even if it's algorithmic.
And they've just got to change their thought process.
We will get into this.
Now, look, you were into numbers.
Apparently, you were number crunching car number plates, registration plates when you were a kid.
So, you know, your parents must have known you were massively into all of this.
And you eventually became a professor at Harvard.
And then you went to work for a hedge fund, which I guess an awful lot of your colleagues and compatriots did, using your mathematical skills practically, yeah?
Just to be clear, I got a PhD at Harvard.
I was a professor at Barnard College.
But yes, I did, I used my mathematical training, or rather, I should say, you know, I was actually a number theorist, and I went into finance, and that's really a completely different field.
So I wouldn't say I directly used my mathematical training, but I indirectly did use it because when you become a mathematician, you're essentially like a professional learner of logical systems.
So I learned this new logical system when I got there.
And definitely my overall training did help.
Yes.
You became something, and I've never heard this word before.
I don't know where I've been, but you became a quant.
I'm assuming that's, what, quantifier?
Quantitative analyst.
Yeah, that's the, you know, it used to be that traders, you know, the sort of ex-jock traders used to control the Wall Street trading system, but now it's all about the quants.
The quants build the algorithms and then the engineers build the actual trading systems and the traders are gone.
So the key to the world is yours.
And look, in this country, I don't know if it is in your country, but to a lot of people in this country, the words hedge fund have become a little dirty.
People do not like those words now.
But you went to work for a hedge fund.
I did, yes.
I started in early 2007, before everyone hated hedge funds and before I knew to hate hedge funds.
I honestly didn't know what I was getting into.
But then when I got there, the world started falling apart around us.
And, you know, I learned the hard way, what we did at the hedge fund.
Okay, well, look, I was working in news around 2008, and I have a good friend here who's very well known in the UK called Justin Urquhart Stewart.
Now, Justin is a big guy in finance.
He used to work for Barclays, now has his own company, Seven Investment Management.
And, you know, Justin is very well known in the city of London.
And he used to do finance pieces with me on the radio.
For years and years and years, he'd get up at the crack of dawn to do them.
And around about 07, I think it was, maybe 08, you know, it's 10 years ago now, feels like a long time, probably is.
I remember him saying to me one morning, he said, Howard, subprime loans, it's the next big story.
Watch out for it.
And I thought, what's he talking about?
And then I came to realize what this was all about because it brought the economy down.
For those of us who think we know the detail because we've worked on news desks and we think we're clever.
We're probably not.
What is a hedge fund and how did they get into trouble?
Well, I'll tell you what a hedge fund is, but the truth is they didn't really get into trouble.
A hedge fund is a management fund that only rich people or large funds like retirement funds are even allowed to invest in.
And what they're supposed to do is they're supposed to invest money, at least the kind of hedge fund I worked at.
They're supposed to invest money that makes money independently, statistically independently from the market.
So the idea is that you could invest a lot of money in the market and then you could hedge that bet by investing in a hedge fund.
That's the basic idea.
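For readers who want to see the arithmetic behind that hedge, here is a minimal Python sketch, with invented numbers rather than anything from the interview: variances of independent return streams add, so splitting capital between a market portfolio and a statistically independent stream keeps the average return while cutting the volatility by roughly the square root of two.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 250_000  # a long sample so the statistics settle down

# Hypothetical daily returns: a market index and an independent hedge-fund stream.
market = rng.normal(0.0004, 0.01, n_days)  # mean 0.04%, stdev 1% per day
hedge = rng.normal(0.0004, 0.01, n_days)   # same profile, independent draws

blend = 0.5 * market + 0.5 * hedge         # split the capital evenly

for name, r in [("market only", market), ("50/50 blend", blend)]:
    print(f"{name}: mean {r.mean():.5f}, stdev {r.std():.5f}")

# Var(0.5*A + 0.5*B) = 0.25*Var(A) + 0.25*Var(B) = 0.5*Var(A) when A and B
# are independent, so the blend's stdev comes out near 1% / sqrt(2), ~0.71%.
```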
But to be clear, the hedge funds in general didn't do that badly in the crisis because they really got out of it before it went bad.
Or some of them actually made just tons of money because they bet against the mortgage-backed securities.
I'd say the people that really lost money were the investment banks that were building these securities in the first place.
So this getting out before the problem happened, it kind of reminds me.
I had an uncle Billy who lived in Clifton, New Jersey and had originally come from Liverpool.
He made his money on Wall Street and that was his life.
I remember him trying to explain to me the phenomenon of selling short and how, you know, in his world, that was a good thing.
That was getting out before prices came down.
Sure.
I mean, selling short, I think, is an underappreciated idea because a lot of the times, especially at moments of crisis, people sort of want to make it illegal to sell short.
But actually, selling short is essentially the same thing as betting against something.
And you would want people who have good information or are on top of business plans to be able to say, I don't think this will work, or I think this is overvalued.
So selling short is a really important tool.
If we couldn't sell short, then I think we would have less actual knowledge of what's going on in business.
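A minimal sketch of the mechanics, with made-up prices rather than anything from the interview: a short seller sells borrowed shares now and buys them back later, so the profit is the price drop times the share count, and the loss is uncapped if the price rises instead.

```python
def short_sale_pnl(entry_price: float, exit_price: float, shares: int) -> float:
    """Profit or loss on a short: sell borrowed shares at entry, buy back at exit."""
    return (entry_price - exit_price) * shares

# Short 100 shares at $50: a fall to $35 gains $1,500, a rise to $80 loses
# $3,000 - and there is no ceiling on the loss, which is why shorting is risky.
print(short_sale_pnl(50.0, 35.0, 100))  # 1500.0
print(short_sale_pnl(50.0, 80.0, 100))  # -3000.0
```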
I don't think Leo DiCaprio with that movie that he did, The Wolf of Wall Street, did Wall Street and people who work in finance a lot of favors because a lot of people got the impression that it's like that.
And in some sectors of it, it is.
You know, if you don't mind me jumping ahead a little bit, because I eventually left finance altogether and joined Occupy.
I wanted to get into that process.
Let's do that now because that's important.
The question is, you were working for a hedge fund.
Now, the people involved in that were making an awful lot of money, and they were doing it quite joyfully and quite legally and quite legitimately.
Of course, they were.
And you must have seen people who had fortunes.
Presumably, the opportunity for you to make one came along too.
Why did you fall out of love with that?
To be completely frank, I was never that into money myself.
I mean, that sounds like a really weird thing to say if I worked at a hedge fund, but it's actually true.
I never thought of myself as measured by my salary or my bonus.
And that was the culture of the hedge fund.
And I didn't really enjoy that.
I didn't want to be the person who felt like I was a bigger person because I had gotten a bigger bonus.
That's one thing.
The other thing is that I just simply thought my tools, my skills should be used to improve the system rather than to take advantage of the system.
And I think you're right that the people at my hedge fund anyway, I don't think they did anything illegal.
But the nature of a hedge fund is to try to take advantage of opportunities.
And at that moment in 2008 now, 2009, I looked around and I said, this financial system needs mending.
And maybe I could go into risk and improve the risk system, like improve the actual way we understand risk as a financial system.
And so I wanted to do that because I thought that would be more useful.
And at this time, this was a crucial time for all of us, both sides of the Atlantic.
This was the era of the subprime loan.
And from what I understood of subprime loans, as it was explained to me, is that a bunch of very marginal debt, loans made to people who may not have the ability to repay it, all of this stuff was like garbage swept up into piles and then sold on again.
Yeah, I mean, and the piles themselves were called securities.
When you put a pile, a pool, it's called, of mortgages together, you sort of measure the risk represented by those mortgages.
Actually, I shouldn't say you do this, because credit rating agencies do this, and they're paid to decide how risky that overall pool is.
And they don't do it for the whole pool.
They usually divide the pool into what's called tranches, as people have probably heard.
But anyway, the reason I'm mentioning this is because one of the things that really disillusioned me about working in finance, especially during the crisis, is that the people who are in charge of sort of using mathematical rigor and sophistication to understand the risk represented by these mortgages just simply didn't do their job.
They actually, I consider it a mathematical lie and a weaponized one at that, because once they put these AAA ratings on those mortgage-backed security pools, the other people in the world who didn't understand that this was a lie believed it and they bought this stuff.
And by the way, it goes back to the point you were making before of admiring mathematicians and sort of trusting them.
So that mathematical trust was brandished just for greed, for the sake of making money by the credit rating agencies.
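To make the pooling-and-tranching idea concrete, here is a toy Monte Carlo sketch (the default rate, cushion, and correlation are invented for illustration). The junior tranche absorbs the first 20% of pool losses; the senior tranche looks nearly riskless if loans default independently, but correlated defaults, which the ratings glossed over, hit it surprisingly often.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_loans, n_trials = 1000, 4_000

def senior_impairment_rate(default_prob, correlation):
    """Fraction of simulated scenarios in which pool losses exceed the 20%
    junior cushion, i.e. the 'safe' senior tranche takes a hit.

    One-factor toy model: each scenario draws a common 'economy' shock, and
    correlation pushes all loans toward defaulting together.
    """
    economy = rng.normal(size=(n_trials, 1))
    idiosyncratic = rng.normal(size=(n_trials, n_loans))
    latent = np.sqrt(correlation) * economy + np.sqrt(1 - correlation) * idiosyncratic
    defaults = latent < norm.ppf(default_prob)  # marginal default rate = default_prob
    pool_loss = defaults.mean(axis=1)           # fraction of the pool defaulting
    return (pool_loss > 0.20).mean()

for rho in (0.0, 0.3):
    print(f"correlation {rho}: senior tranche impaired in "
          f"{senior_impairment_rate(0.10, rho):.1%} of scenarios")

# With independent loans (rho=0), a 10% default rate essentially never breaches
# the 20% cushion, so the senior slice looks AAA. With rho=0.3 the same loans
# default together, and the 'safe' tranche is hit in a sizable share of runs;
# the rating was only ever as good as the independence assumption behind it.
```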
And it brings us back to the point that you made so forcibly at the beginning of this, that when I said the algorithm is going to get you, it's not the numbers themselves or the number crunching that is the problem.
It is the interpretation that is put on those numbers by people.
Yes.
Although I should say when you have a mathematical lie like AAA ratings, which is supposed to signify there's very little risk of default, then you can't really blame people for interpreting them as stated.
But somebody had to make that rating in the first place.
Exactly.
And also, I mean, one of the ironies of my battle against terrible algorithms, which extends far beyond finance, but it starts in finance, is that as a field, as a whole, we call it data science.
But my point is, and I try to make this point, that it's not really scientific.
What we do is we have these insiders who know much more than us, who are privy to much more information.
They make these decisions for us, and then we certainly just blindly trust them.
We don't actually ask them even for evidence, nor do we collect our own evidence that what they're saying is true.
I feel like the field of data science would be improved greatly if we actually made it scientific.
Here's a question I've heard asked a million times, and I've heard a million different explanations.
This crash of 2008 that affected each and every one of us in so many ways, how did it happen?
I mean, from your point of view, where you were sitting, how did it unfold?
I guess is the best way of asking.
It was inevitable.
And when people say it was an unforeseeable event, that just means that they personally don't want to think about how it actually happened.
What actually happened was, and I'm not saying it was predictable up to the moment it happened or exactly how it happened, but it had to happen because what we had was an inflated housing market and just tons and tons and tons of mortgages that were going bad that were sold in these bundles that we talked about.
But on top of that, we had enormous amounts of other, what are called derivatives, so bets, contracts that make bets on the underlying mortgages that were doubled or triple layered.
So there were bets on bets on bets.
And they were all sort of pointing at these same terrible mortgages, the underlying subprime mortgages.
So the bubble got to be such a massive scale and it was relying on such corrupt underlying mortgages that we knew this was going to happen.
The question was when.
So, for me, of course, I didn't know all this at the time.
I shouldn't say that it was completely predictable for me, but people who were working within that industry certainly could have predicted it.
And they knew the crisis was going to happen.
And they, for the most part, just tried to make as much money before it happened and then get out.
To those of us who don't work in finance, it sounds borderline criminal that somebody might know something like this was going to happen, but didn't do anything about it.
It is criminal.
They should have gone to jail.
But as we know, they didn't.
And we somehow managed to pick up the pieces, but we're still all of us collectively paying the price, aren't we?
Absolutely.
And I would add that even the movies that we see about it sort of seem to try to make the people who knew about it and made money off of it into heroes.
I don't understand it.
Is it partly a naive faith in models? Did people coming into the industry, and we will move off finance eventually, but did people coming into the finance industry learn a kind of blind, childlike trust in the systems?
And so that, I mean, I'm trying to be forgiving here.
I'm trying to be generous towards these people.
But, you know, in many cases, they simply had so much trust in the system that they didn't believe it could all come down.
Well, I do think that at a, you know, at a quantitative hedge fund, you're trained to believe that patterns in historical data, in the historical markets, will repeat.
And that's what you bet on.
The idea that you would say, oh, this will change entirely doesn't seem even to be a relevant concept.
And that's partly because, so far, so good.
The patterns have been repeating.
But it's also partly because, well, if it's a totally new regime and things change entirely, then you're going to go out of business anyway.
So you might as well assume that's not true.
So there's this kind of logically extreme perspective that people take on when they start working in finance, and it just sort of permeates.
But the problem, of course, is that this exact kind of wishful thinking, if you will, this mindset creates the bubbles that cause crises as well.
So like, in other words, you know, there was some first person who packaged subprime mortgages and said, oh, the returns I'm getting on this are too good to be true.
I'll just keep doing this.
And we're going to see if it keeps working.
And then people walked in to that industry later and said, oh, it's been working in the past.
So it's going to continue to work.
So let's do it some more and see how much money we can make out of this.
And every person walks in thinking, oh, it's been established that this works.
So we'll make it continue to work.
But none of them sort of really understands that the overall system is becoming more and more bloated and corrupt.
Figures over here, I don't know what they're like in America, but show that an increasing number of people seem to be living almost beyond their means with a lot of debt on credit cards and in other ways.
Do you think that we're heading for another bubble that will burst?
Are you saying that that's a new thing for new players?
Well, it goes in waves.
And I just listened to the early finance programs here and the experts talking last week about the fact that people are racking up their credit card debt, but they don't have savings that are going to back them up if anything bad happens.
And this is what happened before the last crash.
Yeah.
I mean, I'll tell you what.
When I was working at the hedge fund, one of my best friends was in deep debt.
And she didn't have a steady job.
She was a housekeeper, essentially, and a babysitter.
But she was more than $200,000 in debt.
It was just mind-blowing to me how she'd been extended that much debt.
And I think one of the reasons that I responded so differently than my colleagues did to the crisis was that I knew people who were really suffering under this crisis, because suddenly she didn't have any ability to get another credit card or to pay down her credit card, because everybody was trying to get her to pay it all off at once.
And if you're friends with somebody at that level of society and then you're also working with people who are extremely rich and are worrying about like tax laws changing slightly to their disadvantage, it really gives you a perspective.
God, it certainly does.
I mean, I'm learning as we speak.
Reading your book, which I find absolutely fascinating, and I think everybody should read this book, not just decision makers, politicians, people in corporations, I mean, everybody.
You say that this is the big data economy, and all of the difficulties that you've just been speaking about are connected to it.
And you say that it all makes the rich richer and the poor poorer.
Right.
So, you know, I just mentioned a friend of mine who was in credit card problems.
You know, the credit card companies, of course, have like very minute, granular models of all their customers, and they know how to essentially squeeze their customers for a maximum amount of money.
And that's what data scientists do.
And that's just credit cards.
There's also algorithms going on in the similar ways for insurance companies.
There's algorithms now that decide, you know, how long you should go to prison, whether you should go to college, whether you're going to succeed in your freshman year of college, or whether they should just not admit you in the first place.
There's algorithms essentially in every place where you used to have human decision-making processes.
They're being replaced by algorithms because they're more consistent and because I think this is a slightly deeper philosophical point, but I think when an algorithm replaces a human judgment, then the humans involved are very happy because they don't have to take as much personal responsibility for their judgment calls because they can point to the algorithm and say, the algorithm told me this.
And so it depersonalizes and dehumanizes people as well, but it also makes it seem completely unappealable and completely correct.
So it's got a lot of weird characteristics.
And I should also say that like, I'm not saying that anything new is happening.
You know, there always have been companies that are trying to squeeze their customers, of course.
But the scalability and the efficiency of an algorithm makes it so much easier to do that.
Right.
So we have the big data economy.
And you say that the problem with these weapons of math destruction, the algorithms, is that they are designed to track huge numbers of people.
But I guess that's what the modern economy is built on, isn't it?
Yes.
And I'm not complaining about it per se.
If the algorithms were doing things fairly and transparently and for the good of the public, then that would be a good thing.
The problem is that the algorithms aren't being measured along most lines.
They're being measured essentially only along efficiency lines and cost savings lines.
So a business will introduce an algorithmic process if it saves them money, essentially.
But it won't really test whether it hurts people because that's not something they are particularly careful about.
Why and how would an algorithm hurt somebody?
Well, let me give you an example.
I interviewed Roland Behm.
His son, Kyle Behm, had been red-lighted by a personality test when he tried to get a job.
Now, 70% of applicants in the United States have to go through personality tests even before they get an interview.
So it's absolutely ubiquitous if you want to get a job in the U.S. And it's almost as bad in the U.K. So almost everyone, the majority of people, have to do these personality tests.
And they're secret algorithms.
You don't understand why you're answering questions.
They're hard to know how to answer.
You can't game them easily.
And you often don't even know if you failed or if you passed.
And Kyle was actually lucky to even know that he failed.
The other thing that I should mention about Kyle is that he was a straight-A student.
He was in college.
Then he had to take some time off to get treated for bipolar disorder.
So he was in the hospital being treated for bipolar disorder before going back to college and trying to apply for this job, which was like a very simple job, bagging groceries in a grocery store.
So that's the job he applied to, and he failed the personality test.
So his father, Roland, who's a lawyer, the one I talked to, asked him what kind of questions were on that test.
And he said, well, a lot of them were like the questions that they gave me at the hospital, you know, as a mental health assessment.
And his father recognized that as being illegal under the Americans with Disabilities Act.
So it's illegal to force someone to take a health exam, including a mental health exam, as part of a job application process.
So the reason I'm telling you the story is that, first of all, 70% of, again, applicants have to do something like this.
Second of all, Roland asked Kyle to apply to six other jobs, which he did.
Six other jobs, they were all chain stores in the Atlanta, Georgia area.
So he applied to seven different jobs, and they all gave him the same test.
He failed them all, and he was essentially blocked from employment in the Atlanta, Georgia area.
So that's how scalable this is.
Terrible story.
And I guess that kind of stuff is happening all the time.
Absolutely.
The difference is that we mostly don't hear about it.
So when you have bad algorithms that are scaled to this proportion, massive proportions, and they're flawed, but they're secret, how do you find out they're flawed?
You don't.
You essentially have sort of destruction laid on society, you know, on an individual, isolated basis.
And that's kind of what I learned when I went from finance, where there was a financial crash that everyone in the world stood up and noticed.
I went into data science where the same kind of flawed, weaponized lies that were called mathematical that we were supposed to trust, these algorithms, they were also flawed, but they weren't as public.
So the destruction that they were sowing all over the world wasn't something that people stood up and noticed.
Instead, it was happening silently.
That's why I said, I have to write this book.
This seems to me, but maybe I'm just old and maybe I'm just, I don't know, maybe I'm cynical about the world now.
I don't know.
Careworn by years of working in a news environment.
It seems to me, though, that what is missing and at the core of all of this is individual discretion.
Back in the day when I used to apply for jobs, there'd be some people sitting behind a desk and they would make a decision about you, whether they liked the cut of your jib or not would be a key factor.
If they got a feeling about you, if they thought you were good, there were no personality tests or messing around like that.
It was simply done on feeling.
And I happen to believe that we actually got better outcomes in those days.
But maybe I'm wrong.
Well, I mean, so, two comments.
First of all, you know, probably black women would say you're wrong.
You know, like back in those days, it wasn't so great for everyone.
And so one of the promises of the big data algorithms is that we're going to get rid of those old discrimination issues.
And that's a good thing.
Of course.
And that's a good thing.
That's one of the promises.
The problem is that that promise hasn't been realized.
There's no reason to think that it happens naturally.
Algorithms don't naturally stop being sexist if they're trained on data that came from a sexist era.
They are very good at picking up patterns and repeating them.
So if you give them patterns coming from a historical data, even if the historical data is from last week, if your culture that created this data was a sexist culture, then the algorithm will become a sexist algorithm.
And that's one of my main concerns.
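Her point about patterns is easy to demonstrate. The sketch below (synthetic data and invented numbers, not anything from the book) trains an off-the-shelf classifier on hiring decisions that historically favoured one group; even though the protected attribute is withheld, a correlated proxy feature lets the model reproduce the old gap.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000

group = rng.integers(0, 2, n)                # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)              # genuinely job-relevant signal
postcode = group + rng.normal(0.0, 0.5, n)   # innocuous-looking proxy for group

# Historical labels: past human decisions favoured group 0 at equal skill.
hired = (skill + 1.0 * (group == 0) + rng.normal(0.0, 1.0, n)) > 0.5

# Train WITHOUT the protected attribute - the "we never used it" defence.
X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate {pred[group == g].mean():.2f}")
# The model recovers group membership through the postcode proxy and repeats
# the historical gap - data from a biased era trains a biased algorithm.
```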
But the other thing I want to mention about your story is that it is a class issue, which is that rich people still do get understood and assessed on an individual basis.
And that's what you see at private, like elite private schools or elite private colleges.
They get interviewed individually.
If you're applying to an elite job, you get interviewed individually.
So this really is an issue that, for the most part, is affecting poor people.
And poor people will be much more likely, first, to be unfairly discriminated against by an algorithm.
And secondly, not to have a lawyer.
Like, the other thing that was unusual about Kyle is that his father was a lawyer who could deal with this in a lawyerly way.
Most people who apply to minimum wage work, which the personality tests are mostly attached to, don't have access to representation.
So the overall field of big data as it stands, where we have efficiency but we don't have fairness, really becomes a way of making lucky people luckier and unlucky people unluckier.
And it sort of creates this cycle, a feedback loop of modeling, which is a death spiral, really, for the people down below.
But if it's making the rich richer and those with power more powerful, then nothing's going to change it.
Well, the thing I can try to appeal to is a democratic government, if you still have one.
The concept of the public good, that the government should represent the public good.
And I would also add that a lot of the people who work within these walls of the places, the powerful places that build these algorithms, they don't intentionally want to make sexist, racist, discriminatory algorithms.
They actually think that they're doing a good job making fair, objective, unbiased algorithms.
So partly why I wrote this book was to educate people about what they're doing in their own job.
And I've gotten a lot of really good response from people working within the field of data science.
How can you create a racist algorithm without knowing it?
Well, one of the reasons is that you don't think about it very hard.
You just pick up data that's lying around and you figure, oh, this is real data, so I'm just going to be following the numbers.
But you don't realize that it is embedding everything that our society does and says and just implicit biases everywhere.
The second thing is that the way we build algorithms nowadays, especially with machine learning and deep learning and neural nets and all those things, means that we don't, you know, directly model how things work.
Rather, we let the algorithm itself learn how things work, and it's very opaque to the people that build it.
So in other words, we can measure inputs and measure outputs, and we can define accuracy, and we can see whether the thing is accurate, but it is a black box to the person building it.
So unless they actually go to the trouble of measuring whether it's sexist, whether it's racist, they can just go ahead and assume it isn't.
And that's what's been happening.
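The measurement she says rarely happens is not technically hard, and the black box can stay closed. A minimal sketch (a hypothetical helper with invented demo data): feed the model's predictions through a function that reports not just overall accuracy but the error rates broken out by group.

```python
import numpy as np

def audit(y_true, y_pred, group):
    """Report overall accuracy plus per-group error rates for a black-box
    model's predictions - the extra check that usually gets skipped."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    group = np.asarray(group)
    print(f"overall accuracy: {(y_true == y_pred).mean():.2f}")
    for g in np.unique(group):
        m = group == g
        fpr = (y_pred[m] & ~y_true[m]).sum() / max((~y_true[m]).sum(), 1)
        fnr = (~y_pred[m] & y_true[m]).sum() / max(y_true[m].sum(), 1)
        print(f"group {g}: false positive rate {fpr:.2f}, "
              f"false negative rate {fnr:.2f}")

# Demo: a model whose mistakes fall almost entirely on group 1.
rng = np.random.default_rng(3)
group = rng.integers(0, 2, 10_000)
truth = rng.random(10_000) < 0.3
pred = truth | ((group == 1) & (rng.random(10_000) < 0.4))
audit(truth, pred, group)
# Overall accuracy still looks respectable (~0.86), but group 1's false
# positive rate is ~0.4 versus ~0.0 for group 0 - invisible if accuracy is
# the only number anyone reports.
```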
And is that the problem throughout, not only when we're assessing algorithms for whether they're discriminatory, but whether they are inaccurate?
The problem that people are simply saying, oh, well, this is a package of data.
It's just like the subprime loans, isn't it?
It's a package of loans.
It's a package of data, so it must be okay.
Open the gate, let it through.
Well, I think most of them consider themselves professionals.
They don't think of themselves as mathematical liars.
I think the credit rating agencies really were.
I think the people building the more modern algorithms, they do optimize to something, but that something is accuracy rather than fairness.
And those are really different things.
So I think they would scoff at the idea that they're just trusting the model without thinking because they are careful, but they're only careful in one dimension.
So Facebook is optimizing its newsfeed algorithm to time on Facebook or engagement or something like that, which, by the way, is highly correlated with profit.
So they're optimizing to profit.
They're not optimizing to truth.
We can all attest to that.
That's another choice they could have made.
Like, let's optimize to truth so that makes sure that people are presented with the maximum amount of objective truth.
Like, that's just simply not what they're doing.
And it would be a lot more difficult for them to do that, which is one reason that they're not doing it.
Well, if what you're saying is true, then that will favor fake news.
Absolutely.
It does.
And we've seen that.
My point is that when you optimize to A and you ignore B, then B could do whatever it wants.
And what are we to do with organizations like Facebook?
I mean, it's very hard to contact them or have any dealings with them to actually deal with a real person, you know, because the whole thing is so anonymized.
And I'm not sure how many people they employ, but trying to get in touch with them is not an easy thing from what I understand.
I'm not saying sanctions, but what requirements could we make of them?
Well, we have to build the concept of an accountable algorithm.
We don't have that.
There's no accountability whatsoever.
It's extremely influential, but no accountability.
And they love it that way, and they're using their army of lobbyists to keep it that way.
In the longer term, we have to build the concept of accountability and decide what it looks like for Facebook.
Facebook is a really hard one.
It's a really hard question.
I don't have a specific suggestion we can give to Facebook.
But for the sake of personality tests, going back to that one, which is a simpler algorithm, even though it's being used massively, it's still relatively simple.
We can ask them to prove that it's legal.
Give us evidence that this is not sexist, this is not racist, and that this does not discriminate against people with mental health status.
They don't have any proof of that.
They don't provide any evidence of that.
They just use it at a massive scale and we just assume it's working.
And we assume it's legal.
And that should stop.
But how do you find the people who've been victims?
That's the problem, isn't it?
I mean, over here we have these commercials where lawyers say, have you had an accident that wasn't your fault?
If you have, we can act for you.
No win, no fee.
Get in touch with this number now.
You know, short of doing that, how do you find people who've been victims?
You're right.
It's almost impossible.
And I know that firsthand because my editor, when I was writing this book, told me to put blood on every page.
And it's really difficult to do that when you're talking about victims of secret algorithms that they often don't even know they're being scored by.
It's a real problem.
And I am going like one step meta from that and saying, let's, in the future, at least, try to stop there from being so many victims.
So I'm not, I don't really know how to address the past victims.
But I do want to say that Roland Behm, the father of the young man who didn't pass his personality test, has filed seven class action lawsuits against all seven companies that were using that algorithm, that personality test, on the basis that people were denied their ADA rights.
So their ADA rights were violated.
So, I mean, in other words, he's going the litigious route.
He's saying these companies should be held responsible for all the people that they should have hired, but they didn't because of this flawed algorithm.
But if, as you say, the companies will say we did this in good faith, and nobody seems to be acting in bad faith.
If they say we did this in good faith, where could you take the legal route?
Well, the point is that the ADA doesn't say that only if you use people to hire people, you can't discriminate against people with mental health problems.
It just says you're responsible for making sure that your hiring process does not discriminate against people.
So, you know, it is their responsibility, their legal responsibility.
I'm sure you're right that they're claiming plausible deniability, that they're claiming, oh, we were using an algorithm.
We thought it was fair.
We thought it was legal.
But that doesn't square with their legal responsibilities.
Do governments use algorithms like this?
Could we find ourselves stumbling into a war because of algorithms?
Well, those are two questions.
I'll take the first one.
The first one is yes.
Algorithms are used by the justice system, by the policing systems, by the Department of Education to assess teachers, and they're all very flawed.
And I wrote about all three of those, those models, those families of models in my book.
As to whether we could go into a war, I would say we are already in a war if you consider the way cyber war is happening internationally.
People don't really think of it as a war because it seems like, oh, this website went down.
Oh, you know, not such a big deal.
But if you imagine that our electricity grid is hacked and we don't have electricity in the Northeast for five days, that would be an act of war.
And I can certainly imagine that happening.
You use a great phrase, I think, in the middle of the book where you describe the pernicious feedback loop, which makes the algorithm self-justifying.
Yeah.
And I think the most pernicious feedback loop probably comes from the world of sentencing, risk models.
So these risk models that are being built that are very much proxies for race and class.
So like basically, if you're a minority and if you're poor, your score would go up just because of that, just because of demographics.
And moreover, if your score is high enough, you are sentenced to prison for longer.
So the pernicious feedback loop looks like this.
You do commit some crime, but because of your demographics, you're sent to prison for longer.
It doesn't help anyone to be in prison longer.
And once you get out, you have even fewer resources, even fewer opportunities to get a job and live a normal life.
And you end up back in prison.
So the high risk scores actually created a situation where you're more likely to end up back in prison.
That's a pernicious feedback loop.
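The self-justifying dynamic she describes can be captured in a few lines. This toy simulation (all parameters invented) compares two people with identical conduct, one of whom is scored high risk on demographics alone and so receives longer sentences; in the model, the longer time inside feeds back into a genuinely higher chance of reoffending.

```python
import numpy as np

def years_in_prison(extra_sentence, horizon=20, seed=4):
    """Toy feedback loop: longer sentences erode resources on release,
    which raises the real reoffence probability next time around."""
    rng = np.random.default_rng(seed)
    reoffend_prob, total = 0.2, 0
    for _ in range(horizon):
        if rng.random() < reoffend_prob:
            total += 1 + extra_sentence
            reoffend_prob = min(0.9, reoffend_prob + 0.05 * (1 + extra_sentence))
    return total

print("scored low risk, same conduct: ", years_in_prison(extra_sentence=0))
print("scored high risk, same conduct:", years_in_prison(extra_sentence=1))
# The same seed gives both people identical random draws, so only the score
# differs; yet the high-risk person accumulates more prison time and a higher
# reoffence rate, and the score appears to confirm itself.
```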
God.
And that presumably is happening all over the US and the UK right now.
I don't actually know how much the UK uses recidivism risk algorithms, but I know it's absolutely being used all over the United States.
And then, of course, real human beings, police, judiciary, etc., having used these systems, if something bad happens, a person ends up boomeranging back into prison, they can just hold their hands up and say, well, we had no idea this could happen.
Oh, I think worse.
They would think they would say, oh, we were right.
Right.
We were right.
Like, they were criminal and no good.
Like, that's what they would say, because they were already a prisoner and they'd already scored high risk.
What about healthcare?
You use a phrase wellness, the index of wellness.
Yeah, I mean, I don't think that's a completely full-fledged weapon of math destruction yet, because really what it is, is a surveillance method. A lot of employers in this country are in charge of insurance.
So there's this massive goal, especially for larger companies, but also for small companies, to try to minimize their insurance costs from their employees.
And one of the ways they do this is they institute what's called wellness programs.
And it's aptly named; you know it's Orwellian when it's named wellness, because what they claim is that they're trying to help you stay healthy.
But what they really do is charge you more if you aren't healthy.
And they consider that an incentive.
Now, that's not secret and complicated enough to warrant being called an algorithm.
It's just terrible.
It's a cost shifting onto sicker people.
But the other thing they're doing, which is going to long-term be very problematic in my opinion, is that they're collecting information about people.
So when you're in a wellness program, unless you want to pay an extra $50 a month, you have to sort of wear quantitative self-type wearable devices and then go to get like full checkups, which measure all sorts of things that you would not normally measure.
Basically, they're keeping track of you.
They're keeping track of your body.
And they're hoping to do it in a very long-term way so that they can see what kind of employees are long-term expensive and what kind of employees are long-term not as expensive.
And my theory, my fear is that what they're going to do with that information is in the future, only hire people that they recognize as people who are long-term less expensive.
And that's, again, going to be a tax on poor people because poor people, as we know, are less healthy.
This is terrible.
And, you know, these algorithms, these programs could be wrong.
You know, if you're on the wellness program, you've got a monitor fitted to you, and some parameter that you exceed flags up with a red light saying this may be a problem, then that's going to get you marked down and you may not have a problem.
You may never have a problem.
Listen, I mean, it's probably the worst possible thing of all, you know. And it's also really interesting, sort of philosophically and ethically.
It goes to the question of like, what do we have power over as individuals and what do we not have power over?
So some people think it's perfectly fine to charge smokers more for insurance, for example, because they think smoking is a choice, which it might be for some people since some people quit, right?
But some people can't quit.
So maybe that's not really a choice for all people.
And then the question of obesity, you know, is it a choice to be overweight?
Because overweight people tend to get diabetes, which tends to be pretty expensive on an insurance plan.
And then the question becomes, is it a choice to be overweight?
And absolutely, some of the wellness programs assume it is because they charge you more if you haven't lost weight.
But then it becomes, if you look at it in a systematic way, like from a sociological perspective, what you're doing there is you're basically charging African-American women a lot more for insurance, because they statistically have higher obesity rates.
And it becomes a real quandary, I would say.
And by the way, I want to be clear that although I do have my own ethical perspectives, I am not trying to foist my ethics onto all algorithms.
Really, what I'm trying to do is try to convince people that these algorithms embed our ethics as a society.
And we need to be careful about which ethics we agree to and what is going on against our will or behind our backs.
And people are people.
We're all individuals.
I mean, look, I am slightly overweight, okay?
Not worryingly so, not radically so, not appallingly so, but I know I am.
And if I want to, I'll deal with it.
But I don't have diabetes.
I have no sign of having diabetes.
So I would feel rather resentful of any program that marked me down for anything on the basis that, oh, you're a bit overweight, so you might get diabetes.
So we're going to charge you more for your insurance or whatever.
And by the way, it's never that transparent, but that is exactly what happens.
Car insurance, there's another thing.
I mean, in this country, we've had insurance tax rises recently, which have increased insurance costs.
But I and a number of friends have had radical increases in car insurance over the last year or so.
And I'm just wondering as systems for assessing car insurance risk get more sophisticated, or the companies think they're sophisticated, we might think they're wrong, we're going to have more and more of these problems.
And because somebody at the end of the day is making money out of all of this, nothing's going to change.
Listen, I want to take this conversation up one level, which is I think that big data is actually incompatible with insurance.
And let me tell you what I mean by that.
Insurance, if you think about it abstractly, is meant to pool risk.
You know, it's that any individual without insurance, when they hit an emergency, they can't afford the costs.
So what they do is they join with a bunch of people and pay a sort of affordable monthly payment into a big pile.
And then when somebody has an emergency, they can take it from that pile.
And that is a sort of cultural understanding of insurance.
And that is what's called pooled risk.
Big data is exactly the opposite.
What big data does, what big data is very good at, is getting increasingly good at, and will get better and better at over time, so this is going to be a worse problem in 50 years, is segregating people by risk.
In other words, putting people into different categories depending on how risky they are for exactly what kind of problem.
So it's the same problem for car insurance as it is for healthcare.
Like as we get better at understanding people's risks, are we going to use that to help them, to give them better, healthier lives?
Are we going to tell the doctors that they're more risky, they're at more risk for this particular disease and we're going to help them?
Or are we going to use that for an insurance company?
We're going to give that information to an insurance company that can charge them way more to the point where they can't afford insurance anymore, which actually has now defeated the concept of insurance altogether.
Same thing with car insurance.
So in fact, it's the same thing as with all insurance.
If insurers know exactly how much risk a given person poses, and if they charge them that amount, then they're just essentially making them pay in advance for the emergency that they're going to have.
Or may not have.
Or may not have.
That's the point you keep making, which is right.
If you're 60% likely to die of a heart attack in the next five years, that doesn't mean you're going to die of a heart attack in the next five years, right?
Or if you're 60% likely to get diabetes, you don't necessarily get diabetes, but you're going to be charged more for the cost of your diabetes care.
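The pooled-versus-segmented contrast she draws reduces to simple arithmetic. A sketch with invented figures: ten people, one unpredictable $50,000 emergency.

```python
n_people, emergency_cost = 10, 50_000

# Pooled risk: nobody knows who will be hit, so everyone shares the expected cost.
print(f"pooled premium: ${emergency_cost / n_people:,.0f} each")  # $5,000 each

# Perfect segmentation: big data flags the one 'risky' person and prices each
# individual at their own expected cost.
print(f"segmented: flagged person pays ${emergency_cost:,.0f}, the others pay $0")

# At the limit, the flagged person is simply prepaying the emergency - and if
# the prediction is wrong, they have paid in full for an emergency that never
# comes. Either way the risk is no longer shared, which was the whole point
# of insurance.
```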
But as everything, insurance and everything, becomes more consolidated and companies get swallowed up by other companies, everything gets bigger.
Doesn't it mean that the world is becoming more driven by profit and is becoming more risk averse for that reason?
I mean, those two things are seemingly connected, it seems to me, the risk aversion and the desire to get more and more money?
Well, I think it's fair to say that we have lost the concept of a safety net.
You know, the insurance is a kind of safety net.
And as we become more and more big data-centered in that kind of field, we're losing it.
It's no longer a safety net.
So the question as a society that we have to answer is, are we going to replace that lost safety net with a different kind of safety net?
Or are we just going to let people die because they can't afford their medical treatment?
Or are we going to let people not have a car because they can't afford their car insurance, which means that they can't have a job, which means that they can't afford anything, and which means they'll die next time they get sick.
So it all becomes the same question.
It becomes the question of like, how are we going to treat people that are unlucky according to the big data?
And I've just seen a little mental image of a bunch of dominoes collapsing over.
You know, somebody cannot get a job and you look down the scale of things they've been assessed for.
You know, once you're at the bottom of the stack, you are really at the bottom of the stack.
Yes, I think the other thing that I want to say, the other thing that big data is incompatible with is upward mobility.
Big data sees where you are and keeps you there.
Well, I don't like that concept of big data at all, I have to say.
But then, like I say, yes, I have been on news desks all my life and I am quite old-fashioned.
You say that predictive algorithms are silently shaping and controlling our destinies.
That does seem to be at the core of everything you said.
Yes.
We can do better, though.
I just want to say, again, I'm not saying let's stop using all algorithms.
That's not feasible.
But I am saying let's start making those algorithms accountable to us.
You say that we need a regulatory system for weapons of math destruction, and that we need to incorporate non-numerical values.
Yes, we do.
Yes.
We need to decide on the ethical frameworks that we can live with, and we need to make sure that that's what the algorithms are following.
But if everything's working and companies are making money and there hasn't been insurrection in society, there's no incentive to do any of this.
Well, we need a little more insurrection, probably.
We're all distracted right now by the Trump situation, but I think in the next 10, 15 years, we're going to see movement.
And by that, I mean we're going to see a real concept of accountability.
We're going to see regulators getting people and getting businesses in trouble and suing them for doing something that is illegal, that is algorithmically done, but it's illegal.
I think we're going to shape up.
I don't think we have a choice, actually.
There was a TV program that you may not have seen.
It was a British program that was huge in America and constantly gets repeated on TV, even though it was made 50 years ago this year.
Called The Prisoner, starring Patrick McGoon.
And it starts with him saying, I will not be, what is it, indexed, stamped, filed, or numbered.
I am not a number.
I am a free man.
That seems to be very much part of what you're talking about.
It seems to me that the future that he foresaw 50 years ago is here.
Yeah, I feel like we're living in a science fiction dystopia.
And I just needed to write the book to alert people to that fact.
Okay.
Have you had any contact with people who make laws in the U.S., for example?
Have you tried to interest them in what you've been saying?
Sure.
I've talked to a bunch of people, you know, and there's definitely interest.
There's more and more money coming into this.
I haven't yet talked to a data scientist at Google about it, though.
And also, it's partly down to the education system, isn't it?
People who are coming up in these various fields, young people need to be taught that numbers are not gods.
They have to be used, like everything, with respect.
If you fill a car full of petrol, you've got to be a little bit careful.
If you use data, it seems to me you've got to be a little bit careful in the same way, but people haven't been.
Yeah, that's one of the reasons I always talk to young people for free when I'm invited, if I can do it.
But I'm also happy to say that quite a few classes across the country have started using my book as a textbook.
And if you were to try to persuade teachers in schools to use this book, or at least interest pupils in it, what would you say?
What's the single most compelling argument?
Well, I think young people especially are going to be living in an algorithmic world and they need to know what tools they have and what tools they can develop to fight back and to hold them accountable.
It's a political fight.
It's not a technical fight.
I'm not talking to mathematicians.
I'm talking to everyone.
Big task you've got.
I hope you join me.
Well, no, you know, I was halfway there before we even spoke.
I have to say, Kathy, but I think the problem is, and again, this is going to make me sound like an old reactionary, is the problem is that people are more accepting than I've ever known them in my life.
They are less questioning.
Now, all right, there will be people who say, Howard, you're completely wrong.
Look what's happened politically in this country where we've got Brexit and all the rest of it.
I'm not talking about that.
I'm talking about the small change of daily life, not big political decisions, which are swayed by politicians making arguments.
These are things that are happening behind the scenes.
And because people work very hard for not a lot of money in the main these days, and they watch a lot of reality TV and there simply is no time, you know, my fear is that these issues are just not going to get engaged.
Well, that's why I'm happy to be on this radio program.
I think you're right about the current situation, but I think we absolutely have to get this level of public engagement if it's going to really change.
I know that you did a pretty whirlwind tour of the UK, and I tried to catch up with you in London when you were here, but you were just so busy.
And by the time we connected through your publisher, you were heading back to the US.
But we've been able to do this now.
And was your trip to the UK successful?
Were people listening?
It was actually kind of amazing, because when I came there first, when my first hardcover came out, the interest was much smaller than it is now.
So that's also a promising sign that people are more interested now.
I think that people are starting to really be interested, especially in the context of political propaganda ads on Facebook.
Do those people who you worked with at the hedge fund, do they know what you're doing now?
Sure.
And what, I mean, it's a big generalization, but what do they think?
I think it's a mixed bag, to be honest.
I think some of them think I over-exaggerate everything and that I'm bad PR for them.
But, you know, it is what it is.
Your work, it's changed people's lives, I understand.
Well, good luck with it, Cathy.
Please stay in touch.
Are you going to be out of action work-wise for a while?
I guess you will be.
It's laparoscopic surgery, so I'll be walking around on Tuesday.
All right, let me know how it goes.
Cathy, thanks for talking with me.
Thanks a lot.
Do you have a website that people can go to to read more about you?
Well, my personal website is mathbabe.org.
Okay.
And my company's website is oneilrisk.com.
All right, Cathy.