Kevin Scott: Microsoft CTO | Lex Fridman Podcast #30
The following is a conversation with Kevin Scott, the CTO of Microsoft.
Before that, he was the Senior Vice President of Engineering and Operations at LinkedIn, and before that, he oversaw mobile ads engineering at Google.
He also has a podcast called Behind the Tech with Kevin Scott, which I'm a fan of.
This was a fun and wide-ranging conversation that covered many aspects of computing.
It happened over a month ago, before the announcement of Microsoft's investment in OpenAI that a few people have asked me about.
I'm sure there'll be one or two people in the future that'll talk with me about the impact of that investment.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube, give it 5 stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And I'd like to give a special thank you to Tom and Nalante Bickhausen for their support of the podcast on Patreon.
Thanks, Tom and Nalante.
Hope I didn't mess up your last name too bad.
Your support means a lot and inspires me to keep this series going.
And now, here's my conversation with Kevin Scott.
You've described yourself as a kid in a candy store at Microsoft because of all the interesting projects that are going on.
Can you try to do the impossible task and give a brief whirlwind view of all the spaces that Microsoft is working in, both research and product?
If you include research, it becomes even more difficult.
So I think, broadly speaking, Microsoft's product portfolio includes everything from a big cloud business, a big set of SaaS services.
We have some of what are among the original productivity software products that everybody uses.
We have an operating system business.
We have a hardware business where we make everything from Surface devices to Xbox consoles.
And then there's the research side, where, for instance, there's a really smart young economist, Glen Weyl, who my group works with a lot, who's doing research on these things called radical markets.
He's written an entire technical book about this whole notion of radical markets.
So the research group sort of spans from that to human-computer interaction to artificial intelligence.
And we have GitHub, we have LinkedIn, we have a search, advertising, and news business, and probably a bunch of stuff that I'm embarrassingly not recounting in this list.
Gaming to Xbox and so on, right?
Yeah, gaming for sure. I was having a super fun conversation this morning with Phil Spencer.
So when I was in college, there was this game that LucasArts made called...
Day of the Tentacle that my friends and I played forever.
We're doing some interesting collaboration now with the folks who made Day of the Tentacle.
I was completely nerding out with Tim Schafer, the guy who wrote Day of the Tentacle this morning.
Just a complete fanboy.
It happens a lot.
Microsoft has been doing so much stuff, at such breadth, for such a long period of time. Being CTO, most of the time my job is very, very serious.
And sometimes I get caught up in how amazing it is to be able to have the conversations that I have with the people I get to have them with.
Yeah, to reach back into the sentimental.
And what are radical markets, and the economics behind them?
So the idea with radical markets is: can you come up with new market-based mechanisms?
We're having this debate right now: does capitalism work? Do free markets work?
I think it's a reasonable set of questions to be asking.
One mode of thought there, if you have doubts that the markets are actually working, is to tip towards, okay, let's become more socialist, with central planning, where governments or some other central organization make a bunch of decisions about where work gets done, where the investments go, and where the outputs of those investments get distributed.
Glen's notion is to lean more into market-based mechanisms.
So for instance, this is one of the more radical ideas.
Suppose you had a market-based mechanism for assets like real estate where you could be bid out of your position in, say, your home.
So if somebody came along and said, I can find higher economic utility for this piece of real estate that you're running your business in, then you either have to bid to stay, or the party with the higher economic utility takes over the asset.
That would make it very difficult to have the same sort of rent-seeking behaviors that you've got right now, because if you did speculative bidding, you would very quickly lose a whole lot of money.
And so the prices of the assets would be very closely indexed to the value that they could produce.
And because you'd have this real-time mechanism that forces you to mark the value of the asset to the market, it could be taxed appropriately.
You couldn't sit on this thing and say, oh, this house is only worth 10,000 bucks when everything around it is worth 10 million.
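As a concrete aside, here's a minimal sketch of the self-assessed-value mechanism Kevin is describing, sometimes called a Harberger tax in Weyl's radical-markets work. The class name, tax rate, and prices are all illustrative, not anything from the conversation.

```python
# Toy sketch of a self-assessed ("Harberger") tax: owners declare a value,
# are taxed on that declared value, and must sell to any bidder willing to
# pay it. All names, prices, and the 7% rate are hypothetical.

class Asset:
    def __init__(self, owner: str, declared_value: float):
        self.owner = owner
        self.declared_value = declared_value

    def tax_due(self, rate: float = 0.07) -> float:
        # Taxing the declared value makes under-declaring costly to sustain.
        return self.declared_value * rate

    def bid(self, bidder: str, offer: float) -> bool:
        # Anyone can take the asset at or above the declared value, so
        # under-declaring risks a buyout, and over-declaring raises your tax.
        if offer >= self.declared_value:
            self.owner = bidder
            self.declared_value = offer
            return True
        return False

house = Asset(owner="alice", declared_value=10_000)
print(house.tax_due())          # 700.0 per period on the self-declared value
house.bid("bob", 10_000_000)    # a higher-utility use can claim the asset
print(house.owner)              # 'bob': alice was bid out of her position
```

The design squeezes the declared value from both sides, which is exactly the mark-to-market effect described above.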
That's really interesting. So it's an incentive structure where the prices match the value much better.
Yeah. And Glen does a much better job than I do at selling it, and I probably picked the world's worst example.
And it's intentionally provocative.
So this whole notion, I'm not sure whether I like this notion that we could have a set of market mechanisms where I could get bid out of my property.
But if you're thinking about something like Elizabeth Warren's wealth tax, for instance, it would be really interesting to see how you would actually set the price on the assets; you might have to have a mechanism like that if you put a tax like that in place.
It's really interesting that that kind of research, at least tangentially, is touching Microsoft research.
Yeah. That you're really thinking broadly.
Maybe you can speak to how this connects to AI. We have a candidate, Andrew Yang, who talks about artificial intelligence and the concern that people have about automation's impact on society.
Arguably, Microsoft is at the cutting edge of innovation in all these kinds of ways.
And so it's pushing AI forward.
How do you think about combining all our conversations together here with radical markets and socialism and innovation in AI that Microsoft is doing and then Andrew Yang's worry that that will result in job loss and so on?
How do you think about that? I think it's one of the most important questions in technology, maybe even in society right now: how is AI going to develop over the course of the next several decades, what's it going to be used for, what benefits will it produce, what negative impacts will it produce, and who gets to steer this whole thing?
I'll say at the highest level, one of the real joys of getting to do what I do at Microsoft is that Microsoft has this heritage as a platform company.
Bill has this thing that he said a bunch of years ago: the measure of a successful platform is that it produces far more economic value for the people who build on top of the platform than is created for the platform owner or builder.
And I think we have to think about AI that way.
As a platform. It can't be a thing where there are a handful of companies, sitting in a very small handful of cities geographically, who are making all the decisions about what goes into the AI and then, on top of all this infrastructure, building all of the commercially valuable uses for it.
So I think that's bad from an economics and equitable-distribution-of-value perspective, back to this whole notion of: do the markets work?
But I think it's also bad from an innovation perspective, because I have infinite amounts of faith in human beings that if you give folks powerful tools, they will go do interesting things.
And it's more than just a few tens of thousands of people with the interesting tools.
It should be millions of people with the tools.
So you think about the steam engine in the late 18th century: it was maybe the first large-scale machine substitute for human labor that we'd built.
And in the beginning, when these things were getting deployed, the folks who got most of the value from the steam engines were the folks who had capital, so they could afford to build them, and they built factories around them, and businesses, and the experts who knew how to build and maintain them.
Access to that technology democratized over time.
Now, an engine is not a differentiated thing.
There isn't one engine company that builds all the engines, and all of the things that use engines are made by this company, and they get all the economics from all of that.
No, no, no. Fully democratized.
We're sitting here in this room, and there are probably things like MEMS gyroscopes in both of our phones.
They're like little engines sort of everywhere.
They're just a component in how we build the modern world.
AI needs to get there.
Yeah, so that's a really powerful way to think.
If we think of AI as a platform versus a tool that Microsoft owns, it's a platform that enables creation on top of it.
That's a way to democratize it.
That's really interesting, actually.
And Microsoft, throughout its history, has been positioned well to do that.
And the tieback to this radical markets thing: my team has been working with Glen on this, and Jaron Lanier, actually.
So Jaron is the sort of father of virtual reality.
He's one of the most interesting human beings on the planet, like a sweet, sweet guy.
Yeah. And so Jaron and Glen and folks on my team have been working on this notion of data as labor, or, as they call it, data dignity.
And so the idea is, again going back to this industrial analogy: if you think about data as the raw material that is consumed by the machine of AI in order to do useful things, then we're not doing a really great job right now of having transparent marketplaces for valuing those data contributions.
And we all make them explicitly.
You go to LinkedIn, you set up your profile on LinkedIn; that's an explicit contribution.
You know exactly the information that you're putting into the system, and you put it there because you have some nominal notion of what value you're going to get in return.
But it's only nominal. You don't know exactly what value you're getting in return.
The service is free; it's a low amount of perceived...
And then you've got all this indirect contribution that you're making just by virtue of interacting with all of the technology that's in your...
And so what Glen and Jaron and this data-dignity team are trying to do is figure out whether we can have a set of mechanisms that let us value those data contributions, so that you could create an economy and a set of controls and incentives that would allow people, maybe even in the limit, to earn part of their living through the data that they're creating.
And you can sort of see it in explicit ways.
There are these companies like Scale.ai, and there are a whole bunch of them in China right now, that are basically data-labeling companies.
So if you're doing supervised machine learning, you need lots and lots of labeled training data.
And those people who work for those companies are getting compensated for their data contributions into the system.
And so... it's easier to put a number on their contribution because they're explicitly labeling data.
But you're saying that we're all contributing data in different kinds of ways.
And it's fascinating to start to explicitly try to put a number on it.
Do you think that's possible?
I don't know. It's hard. It really is.
Because we don't have as much transparency as I think we need in how the data is getting used.
And it's super complicated.
We, I think as technologists, sort of appreciate some of the subtlety there.
The data gets created, but atomically it's not that valuable.
The data exhaust that you give off, or the explicit data that I'm putting into the system, isn't super valuable by itself.
It's only valuable when you aggregate it together in large numbers.
It's true even for these folks who are getting compensated for labeling things.
For supervised machine learning now, you need lots of labels to train a model that performs well.
And so, you know, I think that's one of the challenges.
It's: how do you figure out, since this data is getting combined in so many ways, how the value is flowing through those combinations?
Yeah, that's fascinating.
Yeah, and it's fascinating that you're thinking about this, and I wasn't even going into this conversation expecting the breadth of research, really, that Microsoft broadly is thinking about.
You are thinking about it at Microsoft.
So, if we go back to '89, when Microsoft released Office, or 1990, when they released Windows 3.0:
in your view, I know you weren't there through this history, but how has the company changed in the 30 years since, as you look at it now?
The good thing is it started off as a platform company.
It's still a platform company.
The parts of the business that are thriving and most successful are those that are building platforms.
The mission of the company now is...
The mission's changed.
It's changed in a very interesting way.
Back in '89, '90, they were still on the original mission, which was: put a PC on every desk and in every home.
It was basically about democratizing access to this new personal computing technology. When Bill started the company, integrated circuit microprocessors were a brand-new thing, and people were building homebrew computers from kits, the way people build ham radios right now.
And I think this is sort of the interesting thing for folks who build platforms in general.
Bill saw the opportunity there and what personal computers could do.
And it was sort of a reach.
You just sort of imagine where things were when they started the company versus where things are now.
In success, when you've democratized a platform, it just sort of vanishes into the platform.
You don't pay attention to it anymore.
Operating systems aren't a thing anymore.
They're super important, completely critical.
When you see one fail, you sort of understand.
But it's not a thing where you're waiting for the next operating system release in the same way that you were in 1995, right?
In 1995, we had the Rolling Stones on the stage for the Windows 95 rollout.
Like it was like the biggest thing in the world.
Everybody lined up for it the way that people used to line up for iPhone.
But eventually, and this isn't necessarily a bad thing, the success is that it becomes ubiquitous.
It's everywhere. Human beings, when their technology becomes ubiquitous, they just start taking it for granted.
So the mission now, which Satya re-articulated five-plus years ago when he took over as CEO of the company, is to empower every individual and every organization in the world to be more successful.
And so, again, that's a platform mission.
And the way that we do it now is different.
We have a hyperscale cloud that people are building their applications on top of.
We have a bunch of AI infrastructure that people are building their AI applications on top of.
We have a productivity suite of software like Microsoft Dynamics, which some people might not think is the sexiest thing in the world, but it helps people figure out how to automate all of their business processes and workflows, and helps the businesses using it grow and be more successful.
It's a much broader vision in a way now than it was back then.
It was a very particular thing.
Now, we live in this world where technology is so powerful and it's such a basic fact of life that it both exists and is going to get better and better over time or at least more and more powerful over time.
Yeah. So what you have to do as a platform player is just much bigger.
Right. There's so many directions in which you can transform.
You didn't mention mixed reality, too.
You know, that's probably early days, or it depends how you think of it.
But if we think on a scale of centuries, it's the early days of mixed reality.
Oh, for sure. And so, with HoloLens, Microsoft is doing some really interesting work there.
Do you touch that part of the effort?
What's the thinking? Do you think of mixed reality as a platform, too?
Oh, sure. When we look at what the platforms of the future could be, it's fairly obvious that AI is one.
You sort of say it to someone and they get it.
But we also think of mixed reality and quantum as two other interesting, potentially...
Quantum computing? Yeah.
Okay, so let's get crazy then.
So you're talking about some futuristic things here.
Well, mixed reality at Microsoft is really not even futuristic; it's here.
It is. It's incredible stuff. And look, it's having an impact right now.
One of the more interesting things that's happened with mixed reality over the past couple of years, which I didn't clearly see coming, is that it's become the computing device for folks who hadn't used any computing device at all to do their work before.
So technicians and service folks and people who are doing machine maintenance on factory floors.
Because they're mobile, out in the world, working with their hands and servicing these very complicated things, they don't use a mobile phone and they don't carry a laptop with them.
They're not tethered to a desk.
And so where mixed reality is getting traction right now, where HoloLens is selling a lot of units, is in these sorts of applications for these workers. And people love it.
They're like, oh my god, this is for them the same sort of productivity boost that an office worker had when they got their first personal computer.
Yeah. You did mention that AI as a platform is clearly obvious, but can we dig into it a little bit?
Sure. How does AI begin to infuse some of the products in Microsoft?
Currently that means providing training of, for example, neural networks in the cloud, or providing pre-trained models, or even just providing computing resources for whatever inference you want to do using neural networks.
How do you think of AI infusing, as a platform that Microsoft can provide?
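As a concrete aside, here's a minimal sketch of the "pre-trained models plus inference compute" pattern Lex is describing: pull down a model somebody else trained and spend your own compute only on inference. The choice of resnet18 and torchvision is purely illustrative, not anything Microsoft-specific.

```python
# Illustrative only: load a pre-trained model and run inference on it,
# rather than training from scratch. resnet18 is an arbitrary example.
import torch
from torchvision import models

model = models.resnet18(pretrained=True)   # weights trained by someone else
model.eval()                               # inference mode

x = torch.randn(1, 3, 224, 224)            # stand-in for one RGB image
with torch.no_grad():
    logits = model(x)
print(logits.argmax(dim=1))                # predicted ImageNet class index
```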
Yeah, I mean, I think it's super interesting.
It's everywhere. And we run these review meetings now where it's me and Satya and members of Satya's leadership team, and a cross-functional group of folks across the entire company who are either working on AI infrastructure or who have some substantial part of their product work using AI in some significant way.
Now, the important thing to understand, when you think about how the AI is going to manifest in an experience, in something that it's going to make better, is:
you don't want the AI-ness to be the first-order thing.
It's like whatever the product is and the thing that it's trying to help you do, the AI just sort of makes it better.
And this is a gross exaggeration, but people get super excited about where the AI is showing up in products, and I'm like, do you get that excited about where you're using a hash table in your code?
It's just a tool.
It's a very interesting programming tool, but it's an engineering tool.
And so it shows up everywhere.
So we've got dozens and dozens of features now in Office that are powered by fairly sophisticated machine learning.
Our search engine wouldn't work at all if you took the machine learning out of it.
And increasingly, things like content moderation on our Xbox and xCloud platform.
When you say moderation, do you mean like the recommender, showing what you want to look at next?
No, no, no. It's like anti-bullying stuff.
So the usual social network stuff that you have to deal with.
Yeah, correct. But it's really targeted towards a gaming audience, so it's a very particular type of thing, where the line between playful banter and legitimate bullying is a subtle one. It's sort of tough.
I'd love to dig into that, because you also led the engineering efforts at LinkedIn.
If we look at LinkedIn as a social network, and at Xbox gaming with its social components, there are very different kinds of communication going on on the two platforms, I imagine.
And the line in terms of bullying and so on is different on the two platforms.
So, how do you...
I mean, it's such a fascinating philosophical discussion of where that line is.
I don't think anyone knows the right answer.
Twitter folks are under fire now, Jack at Twitter, for trying to find that line.
Nobody knows what that line is, but how do you try to find the line for preventing abusive behavior while at the same time letting people be playful and joke around?
I think, in a certain way, if you have what I would call vertical social networks, it gets to be a little bit easier.
So if you have a clear notion of what your social network should be used for, or what you are designing a community around, then you don't have as many dimensions to your content safety problem as you do in a general purpose platform.
On LinkedIn, the whole social network is about connecting people with opportunity, whether it's helping them find a job, or find mentors, or find their next sales lead, or just allowing them to broadcast their professional identity to their network of peers and collaborators and professional community.
In some ways, that's very, very broad, but in other ways, it's narrow.
You can build machine learning systems that are capable, within those boundaries, of making better automated decisions about what is an inappropriate, offensive, or dangerous comment, or illegal content, when you have some constraints.
Same thing with the gaming social networks.
It's about playing games, about having fun.
And that's why bullying is such an important thing to address: bullying is not fun.
So you want to do everything in your power to encourage it not to happen.
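As a concrete aside, here's a toy sketch of the point Kevin is making: a vertical network can train a moderation classifier against a narrow, domain-specific label set, here gaming chat with just two labels. The four training messages are invented; a real system would use large labeled corpora and far stronger models.

```python
# Toy moderation classifier for a "vertical" community (gaming chat).
# The narrow label set is the point: fewer dimensions to the safety problem.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "gg ez, rematch?",                            # playful banter
    "nice shot, carry us again next game",        # playful banter
    "uninstall the game, you worthless idiot",    # bullying
    "nobody wants you on this team, just quit",   # bullying
]
labels = ["banter", "banter", "bullying", "bullying"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["you are garbage, leave the lobby"]))  # likely 'bullying'
```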
But I think it's sort of a tough problem in general.
It's one where I think eventually we're going to have to have some sort of clarification from our policymakers about what it is that we should be doing, where the lines are, because it's tough.
In a democracy, you want some sort of democratic involvement.
People should have a say in where the lines are drawn.
You don't want a bunch of people making unilateral decisions.
And we're in a state right now for some of these platforms where you actually do have to make unilateral decisions, where the policymaking isn't going to happen fast enough in order to prevent very bad things from happening.
But we need the policymaking side of that to catch up, I think, as quickly as possible, because you want that whole process to be a democratic thing, not some sort of weird thing where you've got a non-representative group of people making decisions that have national and global impact.
It's fascinating because the digital space is different than the physical space in which nations and governments were established.
And so what policy looks like globally, what bullying looks like globally, what healthy communication looks like globally is an open question.
And we're all figuring it out together, which is fascinating.
Yeah. I mean, with sort of fake news, for instance, and...
Deepfakes?
And fake news generated by humans?
Yeah, and we can talk about deepfakes.
I think that is another very interesting level of complexity.
But if you think about just the written word, we invented papyrus, what, 3,000 years ago where you could put word on paper.
And then 500 years ago, we get the printing press, where the word gets a little bit more ubiquitous.
And then you really, really didn't get the ubiquitous printed word until the end of the 19th century, when the offset press was invented.
And then it just sort of explodes, and the cross product of that and the Industrial Revolution's need for educated citizens resulted in this rapid expansion of literacy and the rapid expansion of the word.
But we had 3,000 years up to that point to figure things out: what's journalism, what's editorial integrity, what's scientific peer review.
You built all of this mechanism to try to filter through all of the noise that the technology made possible, to get to something that society could cope with.
And the PC didn't exist 50 years ago.
So in this span of half a century, we've gone from no ubiquitous digital technology to having a device that sits in your pocket where you can say whatever is on your mind to...
What did Mary have?
Mary Meeker just released her new slide deck last week.
We've got 50% penetration of the internet to the global population.
There are 3.5 billion people who are connected now.
It's crazy.
It's inconceivable how fast all of this happens.
It's not surprising that we haven't figured out what to do yet, but we've got to really lean into this set of problems, because we basically have three millennia's worth of work to do on how to deal with all of this, in probably what amounts to the next decade's worth of time.
So, since we're on the topic of tough, challenging problems: let's look at more of the AI tooling that Microsoft is working on, such as face recognition software.
There are a lot of powerful positive use cases for face recognition, but there are some negative ones, and we're seeing those in different governments in the world.
So how does Microsoft think about the use of face recognition software, as a platform, in governments and companies?
How do we strike an ethical balance here?
Yeah, I think we've articulated a clear point of view.
So Brad Smith wrote a blog post last fall, I believe, that outlined very specifically what our point of view is there.
And, you know, I think we believe that there are certain uses to which face recognition should not be put.
And we believe, again, that there's a need for regulation there; the government should really come in and say, this is where the lines are.
And we very much want figuring out where the lines are to be a democratic process.
But in the short term, we've drawn some lines where we push back against uses of face recognition technology.
The city of San Francisco, for instance, I think has completely outlawed any government agency from using face recognition tech.
And that may prove to be a little bit overly broad.
But for certain law enforcement things, I would personally rather be overly cautious in terms of restricting use of it until we have a reasonable, democratically determined regulatory framework for where we could and should use it.
And the other thing there is that we've got a bunch of research that we're doing, and a bunch of progress that we've made, on bias.
There are all sorts of weird biases that these models can have, all the way from the most noteworthy one, where minorities are underrepresented in the training data and the model starts learning strange things.
But there are even other weird things.
I think we've seen in the public research, models can learn strange things, like all doctors are men, for instance.
It really is a thing where it's very important for everybody working on these things, before they push publish, launch the experiment, push the code online, or even publish the paper, to at least start thinking about what some of the potential negative consequences of this stuff are.
I mean, this is where I find the deepfake stuff very worrisome, just because there are going to be some very good, beneficial uses of GAN-generated imagery.
And funny enough, one of the places where it's actually useful is we're using the technology right now to generate synthetic visual data for training some of the face recognition models to get rid of the bias.
So that's one super good use of the tech.
Right. It's getting good enough now that it's going to challenge a normal human being's ability to tell the difference. Right now you can say it's very expensive for someone to fabricate a photorealistic fake video.
And GANs are going to make it fantastically cheap to fabricate a photorealistic fake video.
And so what you assume you can trust as true, versus what to be skeptical about, is about to change.
And like we're not ready for it, I don't think.
The nature of truth.
Right. It's also exciting because I think both you and I probably would agree that the way to solve, to take on that challenge is with technology.
Yeah. Right. There are probably going to be ideas for ways to verify which kind of video is legitimate and which is not.
So to me that's an exciting possibility, most likely for just the comedic genius that the internet usually creates with these kinds of videos.
And hopefully it will not result in any serious harm.
Yeah. And I think we will have technology that may be able to detect whether or not something's fake or real, although the fakes are pretty convincing even when you subject them to machine scrutiny.
But we also have these increasingly interesting social networks that are under fire right now for some of the bad things they do.
One of the things you could choose to do with a social network is use crypto and the networks to have content signed, where you could have a full chain of custody that accompanies every piece of content.
So when you're viewing something and you want to ask yourself, how much can I trust this?
You can click something and have a verified chain of custody that shows, oh, this is coming from this source and it's signed by someone whose identity I trust.
Yeah, I think having that chain of custody means being able to say: here's this video, and it may or may not have been produced using some of this deepfake technology, but if you've got a verified chain of custody, you can trace it all the way back to an identity and decide whether or not you trust that identity.
Like, oh, this is really from the White House, or this is really from the presidential candidate, or it's really from Jeff Weiner, CEO of LinkedIn, or Satya Nadella, CEO of Microsoft.
That might be one way to solve some of the problems.
And that's not super high tech; we've had all of this technology forever.
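As a concrete aside, here's a minimal sketch of the signed-content idea: each party in the distribution path signs the content together with the previous signature, so a viewer can verify the whole chain of custody. It uses Ed25519 from the Python cryptography package; a real system would also need key distribution and identity verification, which is the hard part.

```python
# Toy chain of custody: each publisher signs (content + previous signature),
# so verifying the chain proves who handled the content and in what order.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_step(key: Ed25519PrivateKey, content: bytes, prev_sig: bytes) -> bytes:
    return key.sign(content + prev_sig)   # linking to prev_sig orders the chain

def verify_chain(public_keys, content: bytes, sigs) -> bool:
    prev = b""
    for pub, sig in zip(public_keys, sigs):
        try:
            pub.verify(sig, content + prev)
        except InvalidSignature:
            return False
        prev = sig
    return True

source = Ed25519PrivateKey.generate()     # e.g. the original newsroom
network = Ed25519PrivateKey.generate()    # e.g. the distributing platform
video = b"...raw video bytes..."

s1 = sign_step(source, video, b"")
s2 = sign_step(network, video, s1)
print(verify_chain([source.public_key(), network.public_key()], video, [s1, s2]))
```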
But I think you're right: it has to be some sort of technological thing, because the underlying tech that is used to create this is not going to do anything but get better over time, and the genie is out of the bottle.
There's no stuffing it back in.
And there's a social component, which I think is really healthy for a democracy, where people will be skeptical about the thing they watch in general, which is good.
Skepticism in general is good.
It is good, I think. So deepfakes, in that sense, are creating global skepticism about whether people can trust what they read.
It encourages further research.
I come from the Soviet Union, where basically nobody trusted the media, because you knew it was propaganda.
And that kind of skepticism encouraged further research about ideas as opposed to just trusting any one source.
Well, I think it's one of the reasons why the scientific method and our apparatus of modern science are so good.
Because you don't have to trust anything: the whole notion of modern science is that this is a hypothesis, this is an experiment to test the hypothesis, and this is a peer review process for scrutinizing published results.
And stuff's also supposed to be reproducible.
So you know it's been vetted by this process, but you're also expected to publish enough detail that, if you are sufficiently skeptical of the thing, you can go try to reproduce it yourself.
I don't know what it is; I think a lot of engineers are like this, where your brain is wired for skepticism.
You don't just first-order trust everything that you see and encounter, and you're curious to understand the next thing.
But I think it's an entirely healthy thing, and we need a little bit more of that right now.
So, I'm not a large business owner; I'm just a huge fan of many of the Microsoft products.
Actually, I generate a lot of graphics and images, and I still use PowerPoint to do that.
It beats Illustrator for me, even professionally.
It's fascinating. So I wonder, what does the future of, let's say, Windows and Office look like?
Do you see it? I mean, I remember looking forward to XP. Was it exciting when XP was released?
Just like you said, I don't remember when 95 was released, but XP for me was a big celebration.
And when 10 came out, I was like, okay, well, it's nice.
It's a nice improvement. So what do you see the future of these products?
You know, I think there's a bunch of exciting stuff.
I mean, on the Office front, there are going to be these increasing productivity wins coming out of some of these AI-powered features.
The products will get smarter and smarter in a very subtle way.
There's not going to be this big bang moment where Clippy is going to re-emerge and...
Wait a minute. Wait, wait, wait. Is Clippy coming back?
But quite seriously: on the injection of AI, there's not much, or at least I'm not familiar with, assistive-type stuff going on inside the Office products, like a Clippy-style personal assistant.
Do you think there's a possibility of that in the future?
So I think there are a bunch of very small ways in which machine learning powered assistive things are in the product right now.
So there are a bunch of interesting things.
The auto-response stuff is getting better and better, and it's getting to the point where it can see that this person is clearly trying to schedule a meeting, so it looks at your calendar and automatically tries to find a time and a space that's mutually interesting.
We have this notion of Microsoft Search, where it's not just web search, but search across all of your information that's sitting inside of your Office 365 tenant, and potentially in other products.
There's a thing called the Microsoft Graph that is basically an API federator that gets you hooked up across the entire breadth of what were information silos before they got woven together with the graph.
That is, with increasing effectiveness, plumbed into some of these auto-response things, where you're going to be able to see the system automatically retrieve information for you.
I frequently send out emails to folks where I can't find a paper or a document or whatnot.
There's no reason why the system won't be able to do that for you.
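As a concrete aside, here's a hedged sketch of what that retrieval might look like through the Microsoft Graph API layer described above. It assumes you already have an OAuth access token, and the request shape is simplified from the Graph search endpoint; treat the details as illustrative rather than a reference.

```python
# Simplified sketch of searching tenant content through Microsoft Graph.
# Assumes a valid OAuth token; error handling and paging are omitted.
import requests

def find_documents(access_token: str, query: str) -> dict:
    resp = requests.post(
        "https://graph.microsoft.com/v1.0/search/query",
        headers={"Authorization": f"Bearer {access_token}"},
        json={"requests": [{
            "entityTypes": ["driveItem"],          # files across the tenant
            "query": {"queryString": query},
        }]},
    )
    resp.raise_for_status()
    return resp.json()

# hits = find_documents(token, "that paper on data dignity")
```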
And I think it's building towards having things that look more like a fully integrated assistant.
But you'll see a bunch of steps before that; it will not be this big bang thing where Clippy comes back and you've got this manifestation of a fully powered assistant.
So I think that's definitely coming in.
All of the collaboration, co-authoring stuff's getting better.
It's really interesting.
If you look at how we use the Office product portfolio at Microsoft, more and more of it is happening inside of Teams as a canvas.
And it's this thing where collaboration is at the center of the product, and we've built some really cool stuff, which is about to be open source, that provides framework-level pieces for doing co-authoring.
That's awesome. Is there a cloud component to that on the web?
Forgive me if I don't already know this, but with Office 365, for the collaboration we do in Word, we still send the file around.
We're already a little bit better than that.
The fact that you're unaware of it means we've got a better job to do helping you discover this stuff.
But yeah, I mean, it's already got a huge cloud component.
And part of this framework stuff... we've been working on it for a couple of years, so I know the internal code name for it, but I think when we launched it at Build it was called the Fluid Framework.
What Fluid lets you do is go into a conversation that you're having in Teams and reference part of a spreadsheet that you're working on, where somebody is sitting in the Excel canvas working on the spreadsheet, with a chart or whatnot.
And you can embed part of the spreadsheet in the Teams conversation, where you can dynamically update it, and all of the changes that you're making to this object are updating in real time.
So you can be in whatever canvas is most convenient for you to get your work done.
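As a concrete aside: this is not the Fluid Framework's actual API, just a toy illustration of the underlying idea, one shared object observed from multiple canvases, with every edit broadcast to all of them in real time.

```python
# Toy shared object: two "canvases" (Teams, Excel) observe the same cell map,
# and a write from either one is pushed to both. Real co-authoring systems add
# networking, persistence, and conflict resolution on top of this idea.
class SharedCellMap:
    def __init__(self):
        self._cells = {}
        self._listeners = []

    def on_change(self, callback):
        self._listeners.append(callback)

    def set(self, cell: str, value):
        self._cells[cell] = value
        for notify in self._listeners:
            notify(cell, value)   # every canvas re-renders the same object

cells = SharedCellMap()
cells.on_change(lambda c, v: print(f"Teams card shows {c} = {v}"))
cells.on_change(lambda c, v: print(f"Excel canvas shows {c} = {v}"))
cells.set("B2", 1234)   # an edit in either canvas updates both views
```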
So, out of my own curiosity as an engineer: I know what it's like to lead a team of 10 or 15 engineers.
Microsoft has, I don't know what the numbers are, maybe 40, 50, or 60 thousand engineers.
A lot of engineers. I don't know exactly what the number is.
It's a lot. It's tens of thousands.
Right, this is more than 10 or 15.
I mean, you've led different sizes, mostly large sizes of engineers.
What does it take to lead such a large group toward continued innovation and high productivity, while still developing all kinds of new ideas?
What does it take to lead such a large group of brilliant people?
I think the thing that you learn as you manage larger and larger scale is that there are three things that are very, very important for big engineering teams.
One is having some sort of forethought about what it is that you're going to be building over large periods of time.
Not exactly; you don't need to know that, like, I'm putting all my chips on this one product and this is going to be the thing.
But it's useful to know what sort of capabilities you think you're going to need to have to build the products of the future, and then to invest in that infrastructure.
And I'm not just talking about storage systems or cloud APIs.
It's also, what does your development process look like?
What tools do you want?
What culture do you want to build around how you're collaborating together to make complicated technical things?
And so having an opinion and investing in that, it just gets more and more important.
And the sooner you can get a concrete set of opinions, the better you're going to be.
You can wing it for a while at small scales.
When you start a company, you don't have to be super specific about it.
The biggest miseries that I've ever seen as an engineering leader are in places where you didn't have a clear enough opinion about those things soon enough.
And then you just sort of go create a bunch of technical debt and culture debt that is excruciatingly painful to clean up.
So that's one bundle of things.
Another bundle of things: it's really, really important to have a clear mission that's not just some cute crap you say because you think you should have a mission, but something that clarifies for people where it is that you're headed together.
I know it's probably a little bit too popular right now, but Yuval Harari's book Sapiens, one of the central ideas in his book is that storytelling is the quintessential thing for coordinating the activities of large groups of people, like once you get past Dunbar's number.
And I've really, really seen that, just managing engineering teams.
You can just brute-force things when you're fewer than 120, 150 folks; you can know and trust and understand the dynamics between all the people.
But past that, things just sort of start to catastrophically fail if you don't have some set of shared goals that you're marching towards.
Even though it sounds touchy-feely, and a bunch of technical people will balk at the idea, having that mission is very, very important.
You're absolutely right. Stories, that's how our society works; that's the fabric that connects all of us, these powerful stories.
That works for companies too.
It works for everything.
I mean, even down to, if you really think about it, our currency, for instance, is a story.
Our constitution is a story.
Our laws are a story. We believe very, very strongly in them.
And thank God we do. But they're just abstract things, just words. If we don't believe in them, they're nothing.
And in some sense, those stories are platforms, some of which Microsoft is creating, right?
They have platforms on which we define the future.
So, last question. Let's get philosophical, maybe bigger than even Microsoft: what do you think the next 20, 30-plus years look like for computing, for technology, for devices?
Do you have crazy ideas about the future of the world?
Yeah. Look, I think we're entering this time where we have technology that is progressing at the fastest rate it ever has, and you've got some really big social problems, society-scale problems, that we have to tackle.
And so I think we're going to rise to the challenge and figure out how to intersect all of the power of this technology with all of the big challenges that are facing us.
Whether it's global warming, or the fact that the biggest remainder of the population boom is in Africa for the next 50 years or so, and global warming is going to make it increasingly difficult to feed the global population, in particular in the place where you're going to have the biggest population boom.
I think AI, if we push it in the right direction, can do incredible things to empower all of us to achieve our full potential and to live better lives.
But that also means focusing on some super important things, like how you can apply it to healthcare to make sure that the quality, cost, and ubiquity of health coverage get better and better over time.
That's more and more important every day, in the United States and the rest of the industrialized world.
In Western Europe, China, Japan, Korea, you've got this population bubble of aging working-age folks who, at some point over the next 20, 30 years, are going to be largely retired, and you're going to have more retired people than working-age people.
And then you've got natural questions about who's going to take care of all the old folks and who's going to do all the work.
And the answer to all of these sorts of questions, where you're running into constraints of the world and of society, has always been: what technology is going to help us get around this?
When I was a kid in the 70s and 80s, we talked all the time about population boom, population boom; we're not going to be able to feed the planet.
And we were right in the middle of the Green Revolution, this massive technology-driven increase in crop productivity worldwide.
Some of that was taking things that we knew in the West and getting them distributed to the developing world.
And part of it was things like smarter biology helping us increase yields.
And we don't talk about overpopulation anymore, because we've more or less figured out how to feed the world.
That's a technology story.
And so I'm super, super hopeful about the future, and about the ways we will be able to apply technology to solve some of these super challenging problems.
One of the things that I'm trying to spend my time doing right now is trying to get everybody else to be hopeful as well, because back to Harare, we are the stories that we tell.
If we get overly pessimistic right now about the potential future of technology, we may fail to get all of the things in place that we need to have our best possible future.
And that kind of hopeful optimism, I'm glad that you have it because you're leading large groups of engineers that are actually defining, that are writing that story, that are helping build that future, which is super exciting.
And I agree with everything you said, except I do hope Clippy comes back.